Dataset schema (one record per paper; column name, type, and observed range of lengths/values):

paper_url: string (length 35–81)
arxiv_id: string (length 6–35)
nips_id: float64
openreview_id: string (length 9–93)
title: string (length 1–1.02k)
abstract: string (length 0–56.5k)
short_abstract: string (length 0–1.95k)
url_abs: string (length 16–996)
url_pdf: string (length 16–996)
proceeding: string (length 7–1.03k)
authors: list (length 0–3.31k)
tasks: list (length 0–147)
date: timestamp[ns] (1951-09-01 to 2222-12-22)
conference_url_abs: string (length 16–199)
conference_url_pdf: string (length 21–200)
conference: string (length 2–47)
reproduces_paper: string (22 distinct values)
methods: list (length 0–7.5k)
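Each record below is a flat run of values in the column order listed above. As a minimal sketch of how such a dump could be loaded and inspected with pandas: the file name "papers.parquet" and the parquet format are assumptions for illustration, not part of the original dump.

```python
# Minimal sketch: load a local copy of this dump and inspect a few columns.
# "papers.parquet" is a placeholder; adjust to however the dump is stored
# (parquet, JSON lines, CSV, ...).
import pandas as pd

df = pd.read_parquet("papers.parquet")

# Columns follow the schema above: paper_url, arxiv_id, ..., methods.
print(df.columns.tolist())

# Example: keep papers that carry at least one task label, newest first.
tagged = df[df["tasks"].map(len) > 0].sort_values("date", ascending=False)
print(tagged[["arxiv_id", "title", "date"]].head())
```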
https://paperswithcode.com/paper/an-advanced-reliability-reserve-incentivizes
2506.14664
null
null
An advanced reliability reserve incentivizes flexibility investments while safeguarding the electricity market
To ensure security of supply in the power sector, many countries are already using or discussing the introduction of capacity mechanisms. Two main types of such mechanisms include capacity markets and capacity reserves. Simultaneously, the expansion of variable renewable energy sources increases the need for power sector flexibility, for which there are promising yet often under-utilized options on the demand side. In this paper, we analyze how a centralized capacity market and an advanced reliability reserve with a moderately high activation price affect investments in demand-side flexibility technologies. We do so for a German case study of 2030, using an open-source capacity expansion model and incorporating detailed demand-side flexibility potentials across industry, process heat, and district heating. We show that a centralized capacity market effectively caps peak prices in the wholesale electricity market and thus reduces incentives for investments in demand-side flexibility options. The advanced reliability reserve induces substantially higher flexibility investments while leading to similar overall electricity supply costs and ensuring a similar level of security of supply. The advanced reliability reserve could thus create a learning environment for flexibility technologies to support the transition to climate-neutral energy systems. Additionally, an advanced reliability reserve could be introduced faster and is more flexible than a centralized capacity market. While concrete design parameters are yet to be specified, we argue that policymakers should consider the reliability reserve concept in upcoming decisions on capacity mechanisms in Germany and beyond.
null
https://arxiv.org/abs/2506.14664v1
https://arxiv.org/pdf/2506.14664v1.pdf
null
[ "Franziska Klaucke", "Karsten Neuhoff", "Alexander Roth", "Wolf-Peter Schill", "Leon Stolle" ]
[]
2025-06-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/numerical-evaluation-of-deliberative
2506.14102
null
null
Numerical evaluation of deliberative discussions of the UK food system: stimuli, demographics, and opinion reversion
There is increasing acknowledgement - including from the UK government - of the benefit of employing deliberative processes (deliberative fora, citizens' juries, etc.). Evidence suggests that the qualitative reporting of deliberative fora is often unclear or imprecise. If this is the case, their value to policymakers could be diminished. In this study we develop numerical methods of deliberative processes to document people's preferences, as a complement to qualitative analysis. Data are taken from the Food Conversation, a nationwide public consultation on reformations of the food system comprising 345 members of the general public. Each participant attended 5 workshops, each with differing stimuli covering subtopics of the food system. In each workshop, individuals twice reported responsibility, from 0-10, for changing the food system for 5 stakeholders (governments, the food industry, supermarkets, farmers, individuals). Analyses examined individuals' perceptions of food system change responsibility. Governments were most responsible and farmers least so. We assessed variation by workshop content, and by demographics. Reported responsibility changed most for individuals, and changed least for the food industry. We devise a model to document a reversion effect, where shifts in perceptions on responsibility that occurred during workshops waned over time; this was strongest among those who intended to vote (rather than not to). These results can support qualitative analyses and inform food system policy development. These methods are readily adopted for any such deliberative process, allowing for statistical evaluation of whether they can induce opinion change.
null
https://arxiv.org/abs/2506.14102v1
https://arxiv.org/pdf/2506.14102v1.pdf
null
[ "John Buckell", "Thomas Hancock" ]
[]
2025-06-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/the-pharmaceutical-price-regulation-crisis
2506.14849
null
null
The Pharmaceutical Price Regulation Crisis: Implications on Antidepressant Access for Low-Income Americans
Depression affects more than 280 million people worldwide, with poorer communities bearing a disproportionate burden and facing barriers to treatment. This study examines the role of pharmacy pricing caps in access to antidepressants among poorer Americans through bibliometric analysis of the 100 most cited articles on antidepressant pricing and access in the Web of Science Core Collection. We used tools like Bibliometrix and VOSviewer to visualize publication trends, dominant contributors, thematic clusters, and citation networks in the literature. Findings highlight intransigent inequalities in access to antidepressants based on astronomically high drug pricing as well as systemic inequalities against racial and ethnic minorities in particular. High prices for branded antidepressants are associated with lower therapy initiation and regimen compliance, worse mental illness outcomes, and increased health utilization. This work uncovers critical gaps in the literature and demands immediate policy action to make antidepressants affordable as well as appropriately accessible to marginalized communities.
null
https://arxiv.org/abs/2506.14849v1
https://arxiv.org/pdf/2506.14849v1.pdf
null
[ "Nicole Hodrosky", "Gabriel Cacho", "Faiza Ahmed", "Rohana Mudireddy", "Yapin Wen", "Kymora Nembhard", "Michael Yan" ]
[ "Articles" ]
2025-06-16T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/the-bones-and-shapes-of-the-phillips-curve
2506.14030
null
null
The Bones and Shapes of the Phillips Curve
The COVID-19 pandemic reignited debate on the U.S. Phillips curve. Using MSA-level panel data (2001-2024), we employ a Two-Stage Least Squares (2SLS) instrumental variable strategy with a shift-share instrument to estimate core non-tradable inflation's response to a v/u-based slack measure. We distinguish structural slope stability from state-dependent non-linearities via a threshold model. Our analysis addresses whether the slope of the Phillips Curve changed during and after the Pandemic in the United States by evaluating if recent inflation dynamics reflect an altered structural trade-off ("bones") or the activation of non-linear "shapes" in response to extreme labor market tightness. This distinction offers critical insights into the unemployment cost of disinflation.
null
https://arxiv.org/abs/2506.14030v1
https://arxiv.org/pdf/2506.14030v1.pdf
null
[ "Hanyuan Jiang" ]
[]
2025-06-16T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/the-anatomy-of-india-s-industrial
2506.13936
null
null
The Anatomy of India's Industrial Interdependencies: 7-Digit Product-Level Supply-Use and Input-Output Tables from ASI Data (2016-2022) with a Case Study of the Mobile Phone Sector
This paper outlines the construction of high-resolution, 7-digit product-level Supply-Use Tables (SUTs) and symmetric Input-Output Tables (IOTs) for the Indian economy, leveraging microdata from the Annual Survey of Industries (ASI) for the period 2016-2022. We delineate a robust methodology encompassing the generation of detailed input and output flows, with a particular focus on the reconciliation of data from registered and unregistered manufacturing sectors through a meticulously developed NPCMS-NIC concordance. The critical transformation from the often-rectangular SUTs to square, symmetric product-by-product IOTs is explicated using the Industry Technology Assumption, a choice justified by its suitability for handling by-products prevalent in a diverse manufacturing landscape. The analytical prowess of this newly constructed high-resolution IOT framework is then demonstrated through its application to assess key economic impacts, specifically the Domestic Value Added (DVA) generated and the employment supported by production and export activities. A detailed case study of India's rapidly evolving mobile phone manufacturing sector (NPCMS 4722200) for the 2016-2022 period reveals profound structural shifts: significant output growth coupled with notable import substitution, a remarkable surge in exports, and a dynamic evolution in the DVA versus Foreign Value Added (FVA) shares, particularly in export-oriented production. The analysis further uncovers substantial employment growth, albeit with an increasing reliance on contractual labour and a heartening rise in female participation in the workforce. These meticulously constructed tables represent a significant methodological advancement and provide an invaluable empirical resource for nuanced analysis of sectoral interdependencies, the efficacy of industrial policy, and the complex dynamics of India's engagement with global value chains.
null
https://arxiv.org/abs/2506.13936v1
https://arxiv.org/pdf/2506.13936v1.pdf
null
[ "Sourish Dutta" ]
[]
2025-06-16T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Focus", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Focus", "source_title": "Focus Your Attention (with Adaptive IIR Filters)", "source_url": "https://arxiv.org/abs/2305.14952v2" } ]
https://paperswithcode.com/paper/life-cycle-assessment-tools-for-road-design
2506.13896
null
null
Life cycle assessment tools for road design: analysing linearity assumptions
Road infrastructure significantly impacts how people move and live and the emissions associated with travel behaviour. The design of roads is crucial in mitigating emissions. This paper reviews existing transport life cycle assessment tools that have been developed by various entities and can be used for roads. The review focuses on data sources used in the analysis, methods of estimating carbon dioxide emissions, the underlying software that is used to make the estimates, and any limitations of the tools. A critical issue identified in life cycle assessment analysis is the erroneous assumption that relationships within the assessed systems are linear. The current tools focusing on transport infrastructure assessment were developed based on the linear assumptions and limitations of the life cycle assessment analysis. A significant research gap identified is that existing life cycle assessment tools are not integrated with the design process. The analysis is an add-on process to design and the results of an assessment are not then used iteratively to enhance the design. A case study on aggregate road design found that road area significantly correlates with emissions, slope adjustments reduce emissions, and soil type impacts emissions, suggesting future research should explore non-linear relationships for sustainable road design.
null
https://arxiv.org/abs/2506.13896v1
https://arxiv.org/pdf/2506.13896v1.pdf
null
[ "Nikolaos Kalyviotis" ]
[]
2025-06-16T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/varying-reference-point-salience
2506.13382
null
null
Varying reference-point salience
The salience of reference points and expectations may significantly influence the loss aversion mechanism in effort provision. We exploit a natural experiment where highly professional and incentivized individuals perform their task in a setting with exogenous variation of reference-point salience. While a relevant reference point is salient in some cases, where it influences individuals' expectations, it is obscured in others. This enables us to examine the interplay between reference-point salience and expectation-based loss aversion in shaping effort provision. Exploiting quasi-random variation around the reference point, our regression discontinuity analyses reveal that individuals with positive expectations outperform those with negative expectations only when the reference point is salient.
null
https://arxiv.org/abs/2506.13382v1
https://arxiv.org/pdf/2506.13382v1.pdf
null
[ "Alex Krumer", "Felix Otto", "Tim Pawlowski" ]
[]
2025-06-16T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/large-language-models-as-hidden-persuaders
2506.13313
null
null
Large Language Models as 'Hidden Persuaders': Fake Product Reviews are Indistinguishable to Humans and Machines
Reading and evaluating product reviews is central to how most people decide what to buy and consume online. However, the recent emergence of Large Language Models and Generative Artificial Intelligence now means writing fraudulent or fake reviews is potentially easier than ever. Through three studies we demonstrate that (1) humans are no longer able to distinguish between real and fake product reviews generated by machines, averaging only 50.8% accuracy overall - essentially the same as would be expected by chance alone; (2) that LLMs are likewise unable to distinguish between fake and real reviews and perform equivalently badly or even worse than humans; and (3) that humans and LLMs pursue different strategies for evaluating authenticity which lead to equivalently bad accuracy, but different precision, recall and F1 scores - indicating they perform worse at different aspects of judgment. The results reveal that review systems everywhere are now susceptible to mechanised fraud if they do not depend on trustworthy purchase verification to guarantee the authenticity of reviewers. Furthermore, the results provide insight into the consumer psychology of how humans judge authenticity, demonstrating there is an inherent 'scepticism bias' towards positive reviews and a special vulnerability to misjudge the authenticity of fake negative reviews. Additionally, results provide a first insight into the 'machine psychology' of judging fake reviews, revealing that the strategies LLMs take to evaluate authenticity radically differ from those of humans, in ways that are equally wrong in terms of accuracy, but different in their misjudgments.
null
https://arxiv.org/abs/2506.13313v1
https://arxiv.org/pdf/2506.13313v1.pdf
null
[ "Weiyao Meng", "John Harvey", "James Goulding", "Chris James Carter", "Evgeniya Lukinova", "Andrew Smith", "Paul Frobisher", "Mina Forrest", "Georgiana Nica-Avram" ]
[]
2025-06-16T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/dynamic-reinsurance-treaty-bidding-via-multi
2506.13113
null
null
Dynamic Reinsurance Treaty Bidding via Multi-Agent Reinforcement Learning
This paper develops a novel multi-agent reinforcement learning (MARL) framework for reinsurance treaty bidding, addressing long-standing inefficiencies in traditional broker-mediated placement processes. We pose the core research question: Can autonomous, learning-based bidding systems improve risk transfer efficiency and outperform conventional pricing approaches in reinsurance markets? In our model, each reinsurer is represented by an adaptive agent that iteratively refines its bidding strategy within a competitive, partially observable environment. The simulation explicitly incorporates institutional frictions including broker intermediation, incumbent advantages, last-look privileges, and asymmetric access to underwriting information. Empirical analysis demonstrates that MARL agents achieve up to 15% higher underwriting profit, 20% lower tail risk (CVaR), and over 25% improvement in Sharpe ratios relative to actuarial and heuristic baselines. Sensitivity tests confirm robustness across hyperparameter settings, and stress testing reveals strong resilience under simulated catastrophe shocks and capital constraints. These findings suggest that MARL offers a viable path toward more transparent, adaptive, and risk-sensitive reinsurance markets. The proposed framework contributes to emerging literature at the intersection of algorithmic market design, strategic bidding, and AI-enabled financial decision-making.
null
https://arxiv.org/abs/2506.13113v1
https://arxiv.org/pdf/2506.13113v1.pdf
null
[ "Stella C. Dong", "James R. Finlay" ]
[ "Multi-agent Reinforcement Learning", "reinforcement-learning", "Reinforcement Learning" ]
2025-06-16T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/inequality-s-economic-and-social-roots-the
2506.13016
null
null
Inequality's Economic and Social Roots: the Role of Social Networks and Homophily
I discuss economic and social sources of inequality and elaborate on the role of social networks in inequality, economic immobility, and economic inefficiencies. The lens of social networks clarifies how the entanglement of people's information, opportunities, and behaviors with those of their friends and family leads to persistent differences across communities, resulting in inequality in education, employment, income, health, and wealth. The key role of homophily in separating groups within the network is highlighted. A network perspective's policy implications differ substantially from a narrower economic perspective that ignores social structure. I discuss the importance of ``policy cocktails'' that include aspects that are aimed at both the economic and social forces driving inequality.
null
https://arxiv.org/abs/2506.13016v1
https://arxiv.org/pdf/2506.13016v1.pdf
null
[ "Matthew O. Jackson" ]
[]
2025-06-16T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/parenthood-penalty-in-russia-evidence-from
2506.11858
null
null
Parenthood Penalty in Russia: Evidence from Exogenous Variation in Family Size
The present study aimed to improve upon the existing correlational literature on the parenthood penalty in Russia. An instrumental variables approach based on sibling sex composition and multiple births was employed alongside difference-in-differences designs to analyze rich census and longitudinal datasets. To the best of the authors' knowledge, this is the first study to provide causal estimates of the effect of fertility decisions on subsequent labor market outcomes for mothers and fathers in contemporary Russia. The study's primary finding is that, in contrast to the approximately 10 percent long-term motherhood penalty observed in developed countries, the causal impact of childbearing on women's employment in Russia is most significant in the first year after birth, reducing employment by around 15 percent. This penalty then rapidly declines to a modest 3 percent once children reach school age. The analysis indicates an absence of a systematic fatherhood penalty in terms of employment, although a modest increase in labor supply is observed.
null
https://arxiv.org/abs/2506.11858v1
https://arxiv.org/pdf/2506.11858v1.pdf
null
[ "Vadim Ustyuzhanin" ]
[]
2025-06-13T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/digital-payment-policy-impact-analysis-on-the
2506.11695
null
null
Digital payment policy impact analysis on the intention to use QRIS (quick response code Indonesian standard) during COVID-19 pandemic
This study aims to evaluate the adoption of Bank Indonesia's QRIS (Quick Response code Indonesian Standard) payment system policy. The evaluation is hindered by the contemporaneous emergence of the COVID-19 pandemic, which acts as a confounding factor in adopting the new payment instrument. To disentangle the impact of central bank policy from the pandemic, a novel variation of the Unified Theory of Acceptance and Use of Technology (UTAUT) model is proposed and is estimated using purposive sampling from an online survey with 572 respondents during the pandemic. The results of the study successfully disentangle the policy effect from the pandemic effect, and also separate pandemic risk from common risks (PR) and other technology adoption determinants. The results indicate that perceived central bank policy and pandemic risk are the most influential variables affecting the intention to use QRIS. The findings suggest that this measurement approach can be appropriately used as a complementary tool to examine the effectiveness of the central bank's policy in influencing people's behavior.
null
https://arxiv.org/abs/2506.11695v1
https://arxiv.org/pdf/2506.11695v1.pdf
null
[ "Wishnu Badrawani" ]
[]
2025-06-13T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/econgym-a-scalable-ai-testbed-with-diverse
2506.12110
null
null
EconGym: A Scalable AI Testbed with Diverse Economic Tasks
Artificial intelligence (AI) has become a powerful tool for economic research, enabling large-scale simulation and policy optimization. However, applying AI effectively requires simulation platforms for scalable training and evaluation, yet existing environments remain limited to simplified, narrowly scoped tasks, falling short of capturing complex economic challenges such as demographic shifts, multi-government coordination, and large-scale agent interactions. To address this gap, we introduce EconGym, a scalable and modular testbed that connects diverse economic tasks with AI algorithms. Grounded in rigorous economic modeling, EconGym implements 11 heterogeneous role types (e.g., households, firms, banks, governments), their interaction mechanisms, and agent models with well-defined observations, actions, and rewards. Users can flexibly compose economic roles with diverse agent algorithms to simulate rich multi-agent trajectories across 25+ economic tasks for AI-driven policy learning and analysis. Experiments show that EconGym supports diverse and cross-domain tasks, such as coordinating fiscal, pension, and monetary policies, and enables benchmarking across AI, economic methods, and hybrids. Results indicate that richer task composition and algorithm diversity expand the policy space, while AI agents guided by classical economic methods perform best in complex settings. EconGym also scales to 10k agents with high realism and efficiency.
null
https://arxiv.org/abs/2506.12110v1
https://arxiv.org/pdf/2506.12110v1.pdf
null
[ "Qirui Mi", "Qipeng Yang", "Zijun Fan", "Wentian Fan", "Heyang Ma", "Chengdong Ma", "Siyu Xia", "Bo An", "Jun Wang", "Haifeng Zhang" ]
[ "Benchmarking" ]
2025-06-13T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/experimenting-with-networks
2506.11313
null
null
Experimenting with Networks
We provide an overview of methods for designing and implementing experiments (field, lab, hybrid, and natural) when there are networks of interactions between subjects.
null
https://arxiv.org/abs/2506.11313v1
https://arxiv.org/pdf/2506.11313v1.pdf
null
[ "Arun G. Chandrasekhar", "Matthew O. Jackson" ]
[]
2025-06-12T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/modern-approaches-to-building-effective
2506.15723
null
null
Modern approaches to building effective interpretable models of the property market using machine learning
In this article, we review modern approaches to building interpretable models of property markets using machine learning, based on a mass valuation of property in the Primorye region, Russia. The researcher, lacking expertise in this topic, encounters numerous difficulties in the effort to build a good model. The main source of this is the huge difference between noisy real market data and the ideal data that is common in machine learning tutorials. This paper covers all stages of modeling: the collection of initial data, identification of outliers, the search and analysis of patterns in data, the formation and final choice of price factors, the building of the model, and the evaluation of its efficiency. For each stage, we highlight potential issues and describe sound methods for overcoming emerging difficulties on actual examples. We show that the combination of classical linear regression with interpolation methods of geostatistics makes it possible to build an effective model for land parcels. For flats, where many objects are attributed to a single spatial point, the application of geostatistical methods is difficult. Therefore, we suggest linear regression with automatic generation and selection of additional rules based on decision trees, the so-called RuleFit method. Thus we show that, despite the strong restriction imposed by the requirement of interpretability, which matters in practical settings such as legal proceedings, it is still possible to build effective models of real property markets.
null
https://arxiv.org/abs/2506.15723v1
https://arxiv.org/pdf/2506.15723v1.pdf
null
[ "Irina G. Tanashkina", "Alexey S. Tanashkin", "Alexander S. Maksimchuik", "Anna Yu. Poshivailo" ]
[]
2025-06-05T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "**Linear Regression** is a method for modelling a relationship between a dependent variable and independent variables. These models can be fit with numerous approaches. The most common is *least squares*, where we minimize the mean square error between the predicted values $\\hat{y} = \\textbf{X}\\hat{\\beta}$ and actual values $y$: $\\left(y-\\textbf{X}\\beta\\right)^{2}$.\r\n\r\nWe can also define the problem in probabilistic terms as a generalized linear model (GLM) where the pdf is a Gaussian distribution, and then perform maximum likelihood estimation to estimate $\\hat{\\beta}$.\r\n\r\nImage Source: [Wikipedia](https://en.wikipedia.org/wiki/Linear_regression)", "full_name": "Linear Regression", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Generalized Linear Models (GLMs)** are a class of models that generalize upon linear regression by allowing many more distributions to be modeled for the response variable via a link function. Below you can find a continuously updating list of GLMs.", "name": "Generalized Linear Models", "parent": null }, "name": "Linear Regression", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "", "full_name": "Balanced Selection", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Active Learning", "parent": null }, "name": "BASE", "source_title": "Active Learning at the ImageNet Scale", "source_url": "https://arxiv.org/abs/2111.12880v1" } ]
https://paperswithcode.com/paper/non-discriminatory-personalized-pricing
2506.20925
null
null
Non-Discriminatory Personalized Pricing
A monopolist offers personalized prices to consumers with unit demand, heterogeneous values, and idiosyncratic costs, who differ in a protected characteristic, such as race or gender. The seller is subject to a non-discrimination constraint: consumers with the same cost, but different characteristics must face identical prices. Such constraints arise in regulated markets like credit or insurance. The setting reduces to an optimal transport, and we characterize the optimal pricing rule. Under this rule, consumers may retain surplus, and either group may benefit. Strengthening the constraint to cover transaction prices redistributes surplus, harming the low-value group and benefiting the high-value group.
null
https://arxiv.org/abs/2506.20925v1
https://arxiv.org/pdf/2506.20925v1.pdf
null
[ "Philipp Strack", "Kai Hao Yang" ]
[]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/reasoning-about-bounded-reasoning
2506.19737
null
null
Reasoning about Bounded Reasoning
Interactive decision-making relies on strategic reasoning. Two prominent frameworks are (1) models of bounded reasoning, exemplified by level-$k$ models, which keep reasoning implicit, and (2) epistemic game theory, which makes reasoning explicit. We connect these approaches by "lifting" static complete-information games into incomplete-information settings where payoff types reflect players' reasoning depths as in level-$k$ models. We introduce downward rationalizability, defined via minimal belief restrictions capturing the basic idea common to level-$k$ models, to provide robust yet well-founded predictions in games where bounded reasoning matters. We then refine these belief restrictions to analyze the foundations of two seminal models of bounded reasoning: the classic level-$k$ model and the cognitive hierarchy model. Our findings shed light on the distinction between hard cognitive bounds on reasoning and beliefs about co-players' types. Furthermore, they offer insights into robustness issues relevant for market design. Thus, our approach unifies key level-$k$ models building on clear foundations of strategic reasoning stemming from epistemic game theory.
null
https://arxiv.org/abs/2506.19737v1
https://arxiv.org/pdf/2506.19737v1.pdf
null
[ "Shuige Liu", "Gabriel Ziegler" ]
[]
2025-06-24T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/asymptotic-equilibrium-analysis-of-the-boston
2506.19450
null
null
Asymptotic Equilibrium Analysis of the Boston Mechanism
We analyze the performance of the Boston mechanism under equilibrium play in uniform random markets. We provide two results. First, while the share of students assigned to their first preference is 63% under truthfulness, this fraction becomes vanishingly small in any Nash equilibrium of the preference revelation game induced by the Boston mechanism. Second, we show that there is a Nash equilibrium of the corresponding preference revelation game where the average student is assigned to a highly undesirable school, ranked dramatically worse than the logarithmic rank achieved under truthfulness.
null
https://arxiv.org/abs/2506.19450v1
https://arxiv.org/pdf/2506.19450v1.pdf
null
[ "Josue Ortega" ]
[]
2025-06-24T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/visibly-fair-mechanisms
2506.19176
null
null
Visibly Fair Mechanisms
Priority-based allocation of individuals to positions is pervasive, and elimination of justified envy is often an absolute requirement. This leaves serial dictatorship (SD) as the only rule that avoids justified envy under standard direct mechanisms. What if SD outcomes are undesirable from a designer's perspective? We propose visible fairness, which demands fairness relative to the (potentially purposefully incomplete) preference information the mechanism elicits. Visibly fair mechanisms generalize SD; we fully characterize them and provide necessary and sufficient conditions for strategy-proofness. We show how to apply these results to design strategy-proof visibly fair rules that satisfy a broad class of distributional objectives. Visible fairness, however, results in a new information-efficiency trade-off: meeting distributional goals leads to the avoidance of elicitation of information about preferences that could prevent inefficiencies.
null
https://arxiv.org/abs/2506.19176v1
https://arxiv.org/pdf/2506.19176v1.pdf
null
[ "Inácio Bó", "Gian Caspari", "Manshu Khanna" ]
[ "Fairness" ]
2025-06-23T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/disaster-risk-financing-through-taxation-a
2506.18895
null
null
Disaster Risk Financing through Taxation: A Framework for Regional Participation in Collective Risk-Sharing
We consider an economy composed of different risk profile regions wishing to be hedged against a disaster risk using multi-region catastrophe insurance. Such catastrophic events inherently have a systemic component; we consider situations where the insurer faces a non-zero probability of insolvency. To protect the regions against the risk of the insurer's default, we introduce a public-private partnership between the government and the insurer. When a disaster generates losses exceeding the total capital of the insurer, the central government intervenes by implementing a taxation system to share the residual claims. In this study, we propose a theoretical framework for regional participation in collective risk-sharing through tax revenues by accounting for their disaster risk profiles and their economic status.
null
https://arxiv.org/abs/2506.18895v1
https://arxiv.org/pdf/2506.18895v1.pdf
null
[ "Fallou Niakh", "Arthur Charpentier", "Caroline Hillairet", "Philipp Ratz" ]
[]
2025-06-23T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/broad-validity-of-the-first-order-approach-in
2506.18873
null
null
Broad Validity of the First-Order Approach in Moral Hazard
The first-order approach (FOA) is the main tool for the moral hazard principal-agent problem. Although many existing results rely on the FOA, its validity has been established only under relatively restrictive assumptions. We demonstrate in examples that the FOA frequently fails when the agent's reservation utility is low (such as in principal-optimal contracts). However, the FOA broadly holds when the agent's reservation utility is at least moderately high (such as in competitive settings where agents receive high rents). Our main theorem formalizes this point. The theorem shows that the FOA is valid in a standard limited liability model when the agent's reservation utility is sufficiently high. The theorem also establishes existence and uniqueness of the optimal contract. We use the theorem to derive tractable optimal contracts across several settings. Under log utility, option contracts are optimal for numerous common output distributions (including Gaussian, exponential, binomial, Gamma, and Laplace).
null
https://arxiv.org/abs/2506.18873v1
https://arxiv.org/pdf/2506.18873v1.pdf
null
[ "Eduardo Azevedo", "Ilan Wolff" ]
[ "valid" ]
2025-06-23T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/interim-correlated-rationalizability-in-large
2506.18426
null
null
Interim correlated rationalizability in large games
We provide general theoretical foundations for modeling strategic uncertainty in large distributional Bayesian games with general type spaces, using a version of interim correlated rationalizability. We then focus on the case in which payoff functions are supermodular in actions, as is common in the literature on global games. This structure allows us to identify extremal interim correlated rationalizable solutions with extremal interim Bayes-Nash equilibria. Notably, no order structure on types is assumed. We illustrate our framework and results using the large versions of the electronic mail game and a global game.
null
https://arxiv.org/abs/2506.18426v1
https://arxiv.org/pdf/2506.18426v1.pdf
null
[ "Lukasz Balbus", "Michael Greinecker", "Kevin Reffett", "Lukasz Wozny" ]
[]
2025-06-23T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Focus", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Focus", "source_title": "Focus Your Attention (with Adaptive IIR Filters)", "source_url": "https://arxiv.org/abs/2305.14952v2" } ]
https://paperswithcode.com/paper/an-axiomatization-of-the-random-priority-rule
2506.17997
null
null
An Axiomatization of the Random Priority Rule
We study the problem of assigning indivisible objects to agents where each is to receive at most one. To ensure fairness in the absence of monetary compensation, we consider random assignments. Random Priority, also known as Random Serial Dictatorship, is characterized by equal-treatment-of-equals, ex-post efficiency and probabilistic (Maskin) monotonicity -- whenever preferences change so that a given deterministic assignment is ranked weakly higher by all agents, the probability of that assignment arising should be weakly larger. Probabilistic monotonicity implies strategy-proofness (in a stochastic dominance sense) for random assignment problems and is equivalent to it on the universal domain of strict preferences; for deterministic rules it coincides with Maskin monotonicity.
null
https://arxiv.org/abs/2506.17997v1
https://arxiv.org/pdf/2506.17997v1.pdf
null
[ "Christian Basteck" ]
[ "Fairness" ]
2025-06-22T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/network-heterogeneity-and-value-of
2506.17660
null
null
Network Heterogeneity and Value of Information
This paper studies how payoff heterogeneity affects the value of information in beauty contest games. I show that public information provision is detrimental to welfare if and only if agents' Katz-Bonacich centralities exhibit specific forms of heterogeneity, stemming from the network of coordination motives. A key insight is that agents may value the commonality of information so differently that some are harmed by their neighbors knowing what others know. Leveraging this insight, I also show that when the commonality of information is endogenously determined through information sharing, the equilibrium degree of information sharing can be inefficiently low, even without sharing costs.
null
https://arxiv.org/abs/2506.17660v1
https://arxiv.org/pdf/2506.17660v1.pdf
null
[ "Kota Murayama" ]
[]
2025-06-21T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/capturing-misalignment
2506.17176
null
null
Capturing Misalignment
We introduce and formalize misalignment, a phenomenon of interactive environments perceived from an analyst's perspective where an agent holds beliefs about another agent's beliefs that do not correspond to the actual beliefs of the latter. We demonstrate that standard frameworks, such as type structures, fail to capture misalignment, necessitating new tools to analyze this phenomenon. To this end, we characterize misalignment through non-belief-closed state spaces and introduce agent-dependent type structures, which provide a flexible tool to understand the varying degrees of misalignment. Furthermore, we establish that appropriately adapted modal operators on agent-dependent type structures behave consistently with standard properties, enabling us to explore the implications of misalignment for interactive reasoning. Finally, we show how speculative trade can arise under misalignment, even when imposing the corresponding assumptions that rule out such trades in standard environments.
null
https://arxiv.org/abs/2506.17176v1
https://arxiv.org/pdf/2506.17176v1.pdf
null
[ "Pierfrancesco Guarino", "Gabriel Ziegler" ]
[]
2025-06-20T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/a-house-monotone-coherent-and-droop
2506.12318
null
null
A House Monotone, Coherent, and Droop Proportional Ranked Candidate Voting Method
A ranked-candidate voting method based on Phragmen's procedure is described that can be used to produce a top-down proportional candidate list. The method complies with the Droop proportionality criterion satisfied by Single Transferable Vote. It also complies with house monotonicity and coherence, which are the ranked-candidate analogs of the divisor methods' properties of always avoiding the Alabama and New State paradoxes. The highest-ranked candidate in the list is the Instant Runoff winner, which is in at least one Droop proportional set of N winners for all N.
null
https://arxiv.org/abs/2506.12318v2
https://arxiv.org/pdf/2506.12318v2.pdf
null
[ "Ross Hyman" ]
[]
2025-06-14T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Dynamic Sparse Training method where weight mask is updated randomly periodically", "full_name": "Sparse Evolutionary Training", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Sparsity", "parent": null }, "name": "SET", "source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", "source_url": "http://arxiv.org/abs/1707.04780v2" } ]
https://paperswithcode.com/paper/do-you-know-what-i-mean-a-syntactic
2506.16901
null
null
Do You Know What I Mean? A Syntactic Representation for Differential Bounded Awareness
Without the assumption of complete, shared awareness, it is necessary to consider communication between agents who may entertain different representations of the world. A syntactic (language-based) approach provides powerful tools to address this problem. In this paper, we define translation operators between two languages which provide a ``best approximation'' for the meaning of propositions in the target language subject to its expressive power. We show that, in general, the translation operators preserve some, but not all, logical operations. We derive necessary and sufficient conditions for the existence of a joint state space and a joint language, in which the subjective state spaces of each agent, and their individual languages, may be embedded. This approach allows us to compare languages with respect to their expressiveness and thus, with respect to the properties of the associated state space.
null
https://arxiv.org/abs/2506.16901v1
https://arxiv.org/pdf/2506.16901v1.pdf
null
[ "Ani Guerdjikova", "Evan Piermont", "John Quiggin" ]
[ "Translation" ]
2025-06-20T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/ai-plays-d-rationality-games-with-nash
2506.16467
null
null
AI Plays? δ-Rationality Games with Nash Equilibrium as Special Case
A distortion function, which captures the payoff gap between a player's actual payoff and her true payoff, is introduced and used to analyze games. In our proposed framework, we argue that players' actual payoff functions should be used to explain and predict their behaviors, while their true payoff functions should be used to conduct welfare analysis of the outcomes.
null
https://arxiv.org/abs/2506.16467v1
https://arxiv.org/pdf/2506.16467v1.pdf
null
[ "Fang-Fang Tang", "Yongsheng Xu" ]
[]
2025-06-19T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/two-person-cooperative-games-with-delta
2506.16465
null
null
Two-Person Cooperative Games with delta-Rationality
A player's payoff is modeled as consisting of two parts: a rational-value part and a distortion-value part. It is argued that the (total) payoff function should be used to explain and predict the behaviors of the players, while the rational value function should be used to conduct welfare analysis of the final outcome. We use the Nash demand game to illustrate our model.
null
https://arxiv.org/abs/2506.16465v1
https://arxiv.org/pdf/2506.16465v1.pdf
null
[ "Fang-Fang Tang", "Yongsheng Xu" ]
[]
2025-06-19T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/the-implementability-of-liberalism
2506.16059
null
null
The Implementability of Liberalism
This note shows that under the unrestricted domain, there exists a choice liberal and Nash implementable social choice rule if and only if there are at least three players and the outcome set is at least twice as large as the player set. A social choice rule is choice liberal if and only if for every player, there exists at least one pair of outcomes such that if this player strictly prefers one over the other, the one he prefers is socially desirable and the other one is not. A social choice rule is Nash implementable if and only if there exists a mechanism such that at every preference profile, the set of Nash equilibrium outcomes coincides with the set of socially desirable ones. The proof constructs an intuitive Nash implementing mechanism.
null
https://arxiv.org/abs/2506.16059v1
https://arxiv.org/pdf/2506.16059v1.pdf
null
[ "Héctor Hermida-Rivera" ]
[]
2025-06-19T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Dynamic Sparse Training method where weight mask is updated randomly periodically", "full_name": "Sparse Evolutionary Training", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Sparsity", "parent": null }, "name": "SET", "source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", "source_url": "http://arxiv.org/abs/1707.04780v2" } ]
https://paperswithcode.com/paper/learning-in-random-utility-models-via-online-1
2506.16030
null
null
Learning in Random Utility Models Via Online Decision Problems
This paper examines the Random Utility Model (RUM) in repeated stochastic choice settings where decision-makers lack full information about payoffs. We propose a gradient-based learning algorithm that embeds RUM into an online decision-making framework. Our analysis establishes Hannan consistency for a broad class of RUMs, meaning the average regret relative to the best fixed action in hindsight vanishes over time. We also show that our algorithm is equivalent to the Follow-The-Regularized-Leader (FTRL) method, offering an economically grounded approach to online optimization. Applications include modeling recency bias and characterizing coarse correlated equilibria in normal-form games.
null
https://arxiv.org/abs/2506.16030v1
https://arxiv.org/pdf/2506.16030v1.pdf
null
[ "Emerson Melo" ]
[ "Decision Making" ]
2025-06-19T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/minimal-stable-voting-rules
2506.15323
null
null
Minimal Stable Voting Rules
In this paper, I characterize minimal stable voting rules and minimal self-stable constitutions (i.e., pairs of voting rules) for societies in which only power matters. To do so, I first let players' preference profiles over voting rules satisfy four natural axioms commonly used in the analysis of power: non-dominance, anonymity, null player and swing player. I then provide simple notions of minimal stability and minimal self-stability, and show that the families of minimal stable voting rules and minimal self-stable constitutions are fairly small. Finally, I conclude that political parties have evolved to ensure the minimal self-stability of otherwise not minimal self-stable constitutions.
null
https://arxiv.org/abs/2506.15323v1
https://arxiv.org/pdf/2506.15323v1.pdf
null
[ "Héctor Hermida-Rivera" ]
[]
2025-06-18T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/self-equivalent-voting-rules
2506.15310
null
null
Self-Equivalent Voting Rules
In this paper, I introduce a novel stability axiom for stochastic voting rules, called self-equivalence, by which a society considering whether to replace its voting rule using itself will choose not to do so. I then show that under the unrestricted strict preference domain, a voting rule satisfying the democratic principles of anonymity, optimality, monotonicity, and neutrality is self-equivalent if and only if it assigns to every voter equal probability of being a dictator (i.e., uniform random dictatorship). Thus, any society that desires stability and adheres to the aforementioned democratic principles is bound to either employ the uniform random dictatorship or decide whether to change its voting rule using a voting rule other than itself.
null
https://arxiv.org/abs/2506.15310v1
https://arxiv.org/pdf/2506.15310v1.pdf
null
[ "Héctor Hermida-Rivera" ]
[]
2025-06-18T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/binary-self-selective-voting-rules
2506.15265
null
null
Binary Self-Selective Voting Rules
This paper introduces a novel binary stability property for voting rules, called binary self-selectivity, by which a society considering whether to replace its voting rule using itself in pairwise elections will choose not to do so. In Theorem 1, we show that a neutral voting rule is binary self-selective if and only if it is universally self-selective. We then use this equivalence to show, in Corollary 1, that under the unrestricted strict preference domain, a unanimous and neutral voting rule is binary self-selective if and only if it is dictatorial. In Theorem 2 and Corollary 2, we show that whenever there is a strong Condorcet winner, a unanimous, neutral and anonymous voting rule is binary self-selective (or universally self-selective) if and only if it is the Condorcet voting rule.
null
https://arxiv.org/abs/2506.15265v1
https://arxiv.org/pdf/2506.15265v1.pdf
null
[ "Héctor Hermida-Rivera", "Toygar T. Kerman" ]
[]
2025-06-18T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/efficient-reallocation-of-indivisible
2506.15169
null
null
Efficient reallocation of indivisible resources: Pair-efficiency versus Pareto-efficiency
In the object reallocation problem, achieving Pareto-efficiency is desirable, but may be too demanding for implementation purposes. In contrast, pair-efficiency, which is the minimal efficiency requirement, is more suitable. Despite being a significant relaxation, however, pair-efficiency ensures Pareto-efficiency for any strategy-proof and individually rational rule when agents' preferences are unrestricted. What if agents' preferences have specific restricted structures, such as single-peakedness or single-dippedness? We often encounter such situations in real-world scenarios. This study aims to investigate whether pair-efficiency is sufficient to ensure Pareto-efficiency in such cases. Our main contribution in this paper is establishing the equivalence between pair-efficiency and Pareto-efficiency when dealing with single-peaked or single-dipped preference profiles. This equivalence holds without needing to assume any other properties of the rule. We further show that both the single-peaked domain and the single-dipped domain are the "maximal" domains where this equivalence holds.
null
https://arxiv.org/abs/2506.15169v1
https://arxiv.org/pdf/2506.15169v1.pdf
null
[ "Pinaki Mandal" ]
[]
2025-06-18T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/upstream-competition-and-exclusive-content
2506.15063
null
null
Upstream competition and exclusive content provision in media markets
With a multilateral vertical contracting model of media markets, we examine upstream competition and contractual arrangements in content provision. We analyze the trade of content by the Nash bargaining solution and the downstream competition by the Hotelling location model. We characterize the equilibrium outcomes and the contractual arrangements for various vertical structures. We show that the possibility of exclusive contracts rises when the value of the premium content increases, the degree of horizontal differentiation in the downstream market decreases, the importance of advertising revenue decreases, and the relative bargaining power of upstream firm decreases.
null
https://arxiv.org/abs/2506.15063v1
https://arxiv.org/pdf/2506.15063v1.pdf
null
[ "Kiho Yoon" ]
[]
2025-06-18T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/smart-contracts-and-reaction-function-games
2506.14413
null
null
Smart contracts and reaction-function games
Blockchain-based smart contracts offer a new take on credible commitment, where players can commit to actions in reaction to actions of others. Such reaction-function games extend on strategic games with players choosing reaction functions instead of strategies. We formalize a solution concept in terms of fixed points for such games, akin to Nash equilibrium, and prove equilibrium existence. Reaction functions can mimic "trigger" strategies from folk theorems on infinitely repeated games -- but now in a one-shot setting. We introduce a refinement in terms of safe play. We apply our theoretical framework to symmetric investment games, which includes two prominent classes of games, namely weakest-link and public-good games. In both cases, we identify a safe and optimal reaction function. In this way, our findings highlight how blockchain-based commitment can overcome trust and free-riding barriers.
null
https://arxiv.org/abs/2506.14413v1
https://arxiv.org/pdf/2506.14413v1.pdf
null
[ "Jens Gudmundsson", "Jens Leth Hougaard" ]
[]
2025-06-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/who-and-how-adverse-selection-and-flexible
2506.12979
null
null
Who and How? Adverse Selection and flexible Moral Hazard
We characterize the set of incentive compatible mechanisms in problems with hidden productivity types and flexible hidden actions. We demonstrate the tractability of the characterization with applications.
null
https://arxiv.org/abs/2506.12979v1
https://arxiv.org/pdf/2506.12979v1.pdf
null
[ "Henrique Castro-Pires", "Deniz Kattwinkel", "Jan Knoepfle" ]
[]
2025-06-15T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Dynamic Sparse Training method where weight mask is updated randomly periodically", "full_name": "Sparse Evolutionary Training", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Sparsity", "parent": null }, "name": "SET", "source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", "source_url": "http://arxiv.org/abs/1707.04780v2" } ]
https://paperswithcode.com/paper/selling-certification-content-moderation-and
2506.12604
null
null
Selling Certification, Content Moderation, and Attention
We introduce a model of content moderation for sale, where a platform can channel attention in two ways: direct steering that makes content visible to consumers and certification that controls what consumers know about the content. The platform optimally price discriminates using both instruments. Content from higher willingness-to-pay providers enjoys higher quality certification and more views. The platform cross-subsidizes content: the same certificate is assigned to content from low willingness-to-pay providers that appeals to consumers and content from higher willingness-to-pay providers that does not. Cross-subsidization can benefit consumers by making content more diverse; regulation enforcing accurate certification may be harmful.
null
https://arxiv.org/abs/2506.12604v1
https://arxiv.org/pdf/2506.12604v1.pdf
null
[ "Heski Bar-Isaac", "Rahul Deb", "Matthew Mitchell" ]
[]
2025-06-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/who-s-in-household-targeted-government
2506.12575
null
null
Who's in? Household-targeted Government Policies and the Role of Financial Literacy in Market Participation
This paper examines how household-targeted government policies influence financial market participation conditional on financial literacy, focusing on potential Central Bank Digital Currency (CBDC) adoption. Due to the lack of empirical CBDC data, I use the introduction of retail Treasury bonds in Italy as a proxy to investigate how financial literacy affects households' likelihood to engage with the new instrument. Using the Bank of Italy's Survey on Household Income and Wealth, I explore how financial literacy influenced households' participation in the Treasury bond market following the 2012 introduction of retail Treasury bonds, showing that households with some but low financial literacy are more likely to participate than other household groups. Based on these findings, I develop a theoretical model to explore the potential implications of financial literacy for CBDC adoption, showing that low-literate households with limited access to risky assets allocate more wealth to CBDC, while high-literate households use risky assets to safeguard against income risk. These results highlight the role of financial literacy in shaping portfolio choices and CBDC adoption.
null
https://arxiv.org/abs/2506.12575v1
https://arxiv.org/pdf/2506.12575v1.pdf
null
[ "Maria Elena Filippin" ]
[]
2025-06-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/artificial-intelligence-in-team-dynamics-who
2506.12337
null
null
Artificial Intelligence in Team Dynamics: Who Gets Replaced and Why?
This study investigates the effects of artificial intelligence (AI) adoption in organizations. We ask: (1) How should a principal optimally deploy limited AI resources to replace workers in a team? (2) In a sequential workflow, which workers face the highest risk of AI replacement? (3) How does substitution with AI affect both the replaced and non-replaced workers' wages? We develop a sequential team production model in which a principal can use peer monitoring -- where each worker observes the effort of their predecessor -- to discipline team members. The principal may replace some workers with AI agents, whose actions are not subject to moral hazard. Our analysis yields four key results. First, the optimal AI strategy involves the stochastic use of AI to replace workers. Second, the principal replaces workers at the beginning and at the end of the workflow, but does not replace the middle worker, since this worker is crucial for sustaining the flow of information obtained by peer monitoring. Third, the principal may choose not to fully exhaust the AI capacity at her discretion. Fourth, the optimal AI adoption increases average wages and reduces intra-team wage inequality.
null
https://arxiv.org/abs/2506.12337v1
https://arxiv.org/pdf/2506.12337v1.pdf
null
[ "Xienan Cheng", "Mustafa Dogan", "Pinar Yildirim" ]
[]
2025-06-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/nondistortionary-belief-elicitation
2506.12167
null
null
Nondistortionary belief elicitation
A researcher wants to ask a decision-maker about a belief related to a choice the decision-maker made; examples include eliciting confidence or cognitive uncertainty. When can the researcher provide incentives for the decision-maker to report her belief truthfully without distorting her choice? We identify necessary and sufficient conditions for nondistortionary elicitation and fully characterize all incentivizable questions in three canonical classes of problems. For these problems, we show how to elicit beliefs using variants of the Becker-DeGroot-Marschak mechanism.
null
https://arxiv.org/abs/2506.12167v1
https://arxiv.org/pdf/2506.12167v1.pdf
null
[ "Marcin Pęski", "Colin Stewart" ]
[]
2025-06-13T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/what-pareto-efficiency-adjustments-cannot-fix
2506.11660
null
null
What Pareto-Efficiency Adjustments Cannot Fix
The Deferred Acceptance (DA) algorithm is stable and strategy-proof, but can produce outcomes that are Pareto-inefficient for students, and thus several alternative mechanisms have been proposed to correct this inefficiency. However, we show that these mechanisms cannot correct DA's rank-inefficiency and inequality, because these shortcomings can arise even in cases where DA is Pareto-efficient. We also examine students' segregation in settings with advantaged and marginalized students. We prove that the demographic composition of every school is perfectly preserved under any Pareto-efficient mechanism that dominates DA, and consequently fully segregated schools under DA maintain their extreme homogeneity.
null
https://arxiv.org/abs/2506.11660v1
https://arxiv.org/pdf/2506.11660v1.pdf
null
[ "Josue Ortega", "Gabriel Ziegler", "R. Pablo Arribillaga", "Geng Zhao" ]
[]
2025-06-13T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/rethinking-competition-as-a-non-beneficial
2506.11405
null
null
Rethinking Competition as a Non-Beneficial Mechanism in Economic Systems
Persistent economic competition is often justified as a mechanism of innovation, efficiency, and welfare maximization. Yet empirical evidence across disciplines reveals that competition systematically generates fragility, inequality, and ecological degradation, emergent outcomes not of isolated failures but of underlying systemic dynamics. This work reconceptualizes economic ecosystems as real complex adaptive systems, structurally isomorphic with biological and social ecosystems. Integrating complexity science, evolutionary biology, ecology, and economic and business theory, we classify economic interactions according to their systemic effects and propose a theoretical model of ecosystemic equilibrium based on the predominance of beneficial versus non-beneficial relationships. Recognizing economies as ecologically embedded and structurally interdependent systems provides a novel framework for analyzing systemic resilience, reframing competition as a non-beneficial mechanism.
null
https://arxiv.org/abs/2506.11405v1
https://arxiv.org/pdf/2506.11405v1.pdf
null
[ "Marcelo S. Tedesco", "Gonzalo Marquez" ]
[]
2025-06-13T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/performance-improvement-of-spatial-semantic
2506.21174
null
null
Performance improvement of spatial semantic segmentation with enriched audio features and agent-based error correction for DCASE 2025 Challenge Task 4
This technical report presents submission systems for Task 4 of the DCASE 2025 Challenge. First, the model incorporates additional audio features (spectral roll-off and chroma features) into the embedding feature extracted from the mel-spectral feature to improve the classification capabilities of an audio-tagging model in the spatial semantic segmentation of sound scenes (S5) system. This approach is motivated by the fact that mixed audio often contains subtle cues that are difficult to capture with mel-spectrograms alone; thus, these additional features offer alternative perspectives for the model. Second, an agent-based label correction system is applied to the outputs processed by the S5 system. This system reduces false positives, improving the final class-aware signal-to-distortion ratio improvement (CA-SDRi) metric. Finally, we refine the training dataset to enhance the classification accuracy of low-performing classes by removing irrelevant samples and incorporating external data. That is, audio mixtures are generated from a limited number of data points; thus, even a small number of out-of-class data points could degrade model performance. The experiments demonstrate that the submitted systems employing these approaches yield a relative CA-SDRi improvement of up to 14.7% over the baseline of DCASE 2025 Challenge Task 4.
null
https://arxiv.org/abs/2506.21174v1
https://arxiv.org/pdf/2506.21174v1.pdf
null
[ "Jongyeon Park", "Joonhee Lee", "Do-Hyeon Lim", "Hong Kook Kim", "Hyeongcheol Geum", "Jeong Eun Lim" ]
[ "Audio Tagging", "Semantic Segmentation" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/post-training-for-deepfake-speech-detection
2506.21090
null
null
Post-training for Deepfake Speech Detection
We introduce a post-training approach that adapts self-supervised learning (SSL) models for deepfake speech detection by bridging the gap between general pre-training and domain-specific fine-tuning. We present AntiDeepfake models, a series of post-trained models developed using a large-scale multilingual speech dataset containing over 56,000 hours of genuine speech and 18,000 hours of speech with various artifacts in over one hundred languages. Experimental results show that the post-trained models already exhibit strong robustness and generalization to unseen deepfake speech. When they are further fine-tuned on the Deepfake-Eval-2024 dataset, these models consistently surpass existing state-of-the-art detectors that do not leverage post-training. Model checkpoints and source code are available online.
We introduce a post-training approach that adapts self-supervised learning (SSL) models for deepfake speech detection by bridging the gap between general pre-training and domain-specific fine-tuning.
https://arxiv.org/abs/2506.21090v1
https://arxiv.org/pdf/2506.21090v1.pdf
null
[ "Wanying Ge", "Xin Wang", "Xuechen Liu", "Junichi Yamagishi" ]
[ "Face Swapping", "Self-Supervised Learning" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/improved-topology-independent-distributed
2506.20001
null
null
Improved Topology-Independent Distributed Adaptive Node-Specific Signal Estimation for Wireless Acoustic Sensor Networks
This paper addresses the challenge of topology-independent (TI) distributed adaptive node-specific signal estimation (DANSE) in wireless acoustic sensor networks (WASNs) where sensor nodes exchange only fused versions of their local signals. An algorithm named TI-DANSE has previously been presented to handle non-fully connected WASNs. However, its slow iterative convergence towards the optimal solution limits its applicability. To address this, we propose in this paper the TI-DANSE+ algorithm. At each iteration in TI-DANSE+, the node selected to update its local parameters is allowed to exploit each individual partial in-network sum transmitted by its neighbors in its local estimation problem, increasing the available degrees of freedom and accelerating convergence with respect to TI-DANSE. Additionally, a tree-pruning strategy is proposed to further increase convergence speed. TI-DANSE+ converges as fast as the DANSE algorithm in fully connected WASNs while reducing transmit power usage. The convergence properties of TI-DANSE+ are demonstrated in numerical simulations.
This paper addresses the challenge of topology-independent (TI) distributed adaptive node-specific signal estimation (DANSE) in wireless acoustic sensor networks (WASNs) where sensor nodes exchange only fused versions of their local signals.
https://arxiv.org/abs/2506.20001v1
https://arxiv.org/pdf/2506.20001v1.pdf
null
[ "Paul Didier", "Toon van Waterschoot", "Simon Doclo", "Jörg Bitzer", "Marc Moonen" ]
[]
2025-06-24T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Dynamic Sparse Training method where weight mask is updated randomly periodically", "full_name": "Sparse Evolutionary Training", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Sparsity", "parent": null }, "name": "SET", "source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", "source_url": "http://arxiv.org/abs/1707.04780v2" } ]
https://paperswithcode.com/paper/jcapt-a-joint-modeling-approach-for-capt
2506.19315
null
null
JCAPT: A Joint Modeling Approach for CAPT
Effective pronunciation feedback is critical in second language (L2) learning, for which computer-assisted pronunciation training (CAPT) systems often encompass two key tasks: automatic pronunciation assessment (APA) and mispronunciation detection and diagnosis (MDD). Recent work has shown that joint modeling of these two tasks can yield mutual benefits. Our unified framework leverages Mamba, a selective state space model (SSM), while integrating phonological features and think token strategies to jointly enhance interpretability and fine-grained temporal reasoning in APA and MDD. To our knowledge, this is the first study to combine phonological attribution, SSM-based modeling, and prompting in CAPT. A series of experiments conducted on the speechocean762 benchmark demonstrate that our model consistently outperforms prior methods, particularly on the MDD task.
null
https://arxiv.org/abs/2506.19315v1
https://arxiv.org/pdf/2506.19315v1.pdf
null
[ "Tzu-Hsuan Yang", "Yue-Yang He", "Berlin Chen" ]
[ "Mamba" ]
2025-06-24T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Adaptive Pseudo Augmentation", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Image Data Augmentation** refers to a class of methods that augment an image dataset to increase the effective size of the training set, or as a form of regularization to help the network learn more effective representations.", "name": "Image Data Augmentation", "parent": null }, "name": "APA", "source_title": "Deceive D: Adaptive Pseudo Augmentation for GAN Training with Limited Data", "source_url": "https://arxiv.org/abs/2111.06849v1" }, { "code_snippet_url": "https://github.com/state-spaces/mamba", "description": "Foundation models, now powering most of the exciting applications in deep learning, are almost universally based on the Transformer architecture and its core attention module. Many subquadratic-time architectures such as linear attention, gated convolution and recurrent models, and structured state space models (SSMs) have been developed to address Transformers’ computational inefficiency on long sequences, but they have not performed as well as attention on important modalities such as language. We identify that a key weakness of such models is their inability to perform content-based reasoning, and make several improvements. First, simply letting the SSM parameters be functions of the input addresses their weakness with discrete modalities, allowing the model to selectively propagate or forget information along the sequence length dimension depending on the current token. Second, even though this change prevents the use of efficient convolutions, we design a hardware-aware parallel algorithm in recurrent mode. We integrate these selective SSMs into a simplified end-to-end neural network architecture without attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5× higher throughput than Transformers) and linear scaling in sequence length, and its performance improves on real data up to million-length sequences. As a general sequence model backbone, Mamba achieves state-of-the-art performance across several modalities such as language, audio, and genomics. On language modeling, our Mamba-3B model outperforms Transformers of the same size and matches Transformers twice its size, both in pre-training and downstream evaluation.", "full_name": "Mamba: Linear-Time Sequence Modeling with Selective State Spaces", "introduced_year": 2000, "main_collection": null, "name": "Mamba", "source_title": "Mamba: Linear-Time Sequence Modeling with Selective State Spaces", "source_url": "https://arxiv.org/abs/2312.00752v2" } ]
https://paperswithcode.com/paper/an-audio-centric-multi-task-learning
2506.18735
null
null
An Audio-centric Multi-task Learning Framework for Streaming Ads Targeting on Spotify
Spotify, a large-scale multimedia platform, attracts over 675 million monthly active users who collectively consume millions of hours of music, podcasts, audiobooks, and video content. This diverse content consumption pattern introduces unique challenges for computational advertising, which must effectively integrate a variety of ad modalities, including audio, video, and display, within a single user experience. Traditional ad recommendation models, primarily designed for foregrounded experiences, often struggle to reconcile the platform's inherent audio-centrality with the demands of optimizing ad performance across multiple formats and modalities. To overcome these challenges, we introduce Cross-modal Adaptive Mixture-of-Experts (CAMoE), a novel framework for optimizing click-through rate (CTR) prediction in both audio-centric and multi-modal settings. CAMoE enhances traditional mixture-of-experts models by incorporating modality-aware task grouping, adaptive loss masking, and deep-cross networks (DCN) to capture complex feature interactions within a multi-modal ad ecosystem. Through extensive ablation studies, we demonstrate that this approach achieves near Pareto-optimal performance across audio, video, and display ad formats, significantly improving AUC-PR compared to conventional single-task and content-based multi-task learning baselines. When deployed at scale on Spotify's ad serving platform, CAMoE delivered substantial gains, yielding a 14.5% increase in CTR for audio ads, a 1.3% increase for video ads, and a 4.8% reduction in expected cost-per-click (eCPC) for audio slots.
null
https://arxiv.org/abs/2506.18735v1
https://arxiv.org/pdf/2506.18735v1.pdf
null
[ "Shivam Verma", "Vivian Chen", "Darren Mei" ]
[ "Click-Through Rate Prediction", "Mixture-of-Experts", "Multi-Task Learning" ]
2025-06-23T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "**Linear Warmup With Linear Decay** is a learning rate schedule in which we increase the learning rate linearly for $n$ updates and then linearly decay afterwards.", "full_name": "Linear Warmup With Linear Decay", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Learning Rate Schedules** refer to schedules for the learning rate during the training of neural networks. Below you can find a continuously updating list of learning rate schedules.", "name": "Learning Rate Schedules", "parent": null }, "name": "Linear Warmup With Linear Decay", "source_title": null, "source_url": null }, { "code_snippet_url": "", "description": "“How do I get a full refund from Expedia?\r\nHow do I get a full refund from Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Quick Help & Exclusive Travel Deals!Have a question about your booking? Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to get live, expert support and unlock exclusive best deal discounts on flights, hotels, and vacation packages. Get clear answers fast and access limited-time travel offers that make your next trip easier, cheaper, and stress-free. Don’t wait—call today and save!\r\n\r\n\r\n“How do I get a full refund from Expedia?\r\nHow do I get a full refund from Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Quick Help & Exclusive Travel Deals!Have a question about your booking? Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to get live, expert support and unlock exclusive best deal discounts on flights, hotels, and vacation packages. Get clear answers fast and access limited-time travel offers that make your next trip easier, cheaper, and stress-free. Don’t wait—call today and save!", "full_name": "Refunds@Expedia|||How do I get a full refund from Expedia?", "introduced_year": 2000, "main_collection": { "area": "General", "description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. 
For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.", "name": "Activation Functions", "parent": null }, "name": "Refunds@Expedia|||How do I get a full refund from Expedia?", "source_title": "Gaussian Error Linear Units (GELUs)", "source_url": "https://arxiv.org/abs/1606.08415v5" }, { "code_snippet_url": "https://github.com/huggingface/transformers/blob/4dc65591b5c61d75c3ef3a2a883bf1433e08fc45/src/transformers/modeling_tf_bert.py#L271", "description": "**Attention Dropout** is a type of [dropout](https://paperswithcode.com/method/dropout) used in attention-based architectures, where elements are randomly dropped out of the [softmax](https://paperswithcode.com/method/softmax) in the attention equation. For example, for scaled-dot product attention, we would drop elements from the first term:\r\n\r\n$$ {\\text{Attention}}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^{T}}{\\sqrt{d_k}}\\right)V $$", "full_name": "Attention Dropout", "introduced_year": 2018, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Attention Dropout", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$", "full_name": "Softmax", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.", "name": "Output Functions", "parent": null }, "name": "Softmax", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8", "description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. 
Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.", "full_name": "Layer Normalization", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.", "name": "Normalization", "parent": null }, "name": "Layer Normalization", "source_title": "Layer Normalization", "source_url": "http://arxiv.org/abs/1607.06450v1" }, { "code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275", "description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.", "full_name": "Dropout", "introduced_year": 2000, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Dropout", "source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", "source_url": "http://jmlr.org/papers/v15/srivastava14a.html" }, { "code_snippet_url": "https://github.com/google-research/bert", "description": "**BERT**, or Bidirectional Encoder Representations from Transformers, improves upon standard [Transformers](http://paperswithcode.com/method/transformer) by removing the unidirectionality constraint by using a *masked language model* (MLM) pre-training objective. The masked language model randomly masks some of the tokens from the input, and the objective is to predict the original vocabulary id of the masked word based only on its context. Unlike left-to-right language model pre-training, the MLM objective enables the representation to fuse the left and the right context, which allows us to pre-train a deep bidirectional Transformer. In addition to the masked language model, BERT uses a *next sentence prediction* task that jointly pre-trains text-pair representations. \r\n\r\nThere are two steps in BERT: *pre-training* and *fine-tuning*. During pre-training, the model is trained on unlabeled data over different pre-training tasks. For fine-tuning, the BERT model is first initialized with the pre-trained parameters, and all of the parameters are fine-tuned using labeled data from the downstream tasks. 
Each downstream task has separate fine-tuned models, even though they\r\nare initialized with the same pre-trained parameters.", "full_name": "BERT", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Language Models** are models for predicting the next word or character in a document. Below you can find a continuously updating list of language models.\r\n\r\n", "name": "Language Models", "parent": null }, "name": "BERT", "source_title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "source_url": "https://arxiv.org/abs/1810.04805v2" }, { "code_snippet_url": null, "description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville", "full_name": "Dense Connections", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.", "name": "Feedforward Networks", "parent": null }, "name": "Dense Connections", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/google-research/vision_transformer", "description": "The **Vision Transformer**, or **ViT**, is a model for image classification that employs a [Transformer](https://paperswithcode.com/method/transformer)-like architecture over patches of the image. An image is split into fixed-size patches, each of them are then linearly embedded, position embeddings are added, and the resulting sequence of vectors is fed to a standard [Transformer](https://paperswithcode.com/method/transformer) encoder. In order to perform classification, the standard approach of adding an extra learnable “classification token” to the sequence is used.", "full_name": "Vision Transformer", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Image Models** are methods that build representations of images for downstream tasks such as classification and object detection. The most popular subcategory are convolutional neural networks. Below you can find a continuously updated list of image models.", "name": "Image Models", "parent": null }, "name": "Vision Transformer", "source_title": "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale", "source_url": "https://arxiv.org/abs/2010.11929v2" }, { "code_snippet_url": null, "description": "**CAMoE** is a multi-stream Corpus Alignment network with single gate Mixture-of-Experts (MoE) for video-text retrieval. The CAMoE employs Mixture-of-Experts (MoE) to extract multi-perspective video representations, including action, entity, scene, etc., then align them with the corresponding part of the text. A [Dual Softmax Loss](https://paperswithcode.com/method/dual-softmax-loss) (DSL) is used to avoid the one-way optimum-match which occurs in previous contrastive methods. 
Introducing the intrinsic prior of each pair in a batch, DSL serves as a reviser to correct the similarity matrix and achieves the dual optimal match.", "full_name": "CAMoE", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Video-Text Retrieval Models", "parent": null }, "name": "CAMoE", "source_title": "Improving Video-Text Retrieval by Multi-Stream Corpus Alignment and Dual Softmax Loss", "source_url": "https://arxiv.org/abs/2109.04290v3" } ]
https://paperswithcode.com/paper/efficient-and-generalizable-speaker
2506.18623
null
null
Efficient and Generalizable Speaker Diarization via Structured Pruning of Self-Supervised Models
Self-supervised learning (SSL) models such as WavLM have brought substantial improvements to speaker diarization by providing rich contextual representations. However, the high computational and memory costs of these models hinder their deployment in real-time and resource-constrained scenarios. In this work, we present a comprehensive study on compressing SSL-based diarization models through structured pruning guided by knowledge distillation. Building upon our previous work, we extend the analysis to include pruning objectives based on multiply-accumulate operations (MACs), investigate module-wise and progressive pruning strategies, and examine the impact of training data quantity. Experimental results show that our method reduces model size by up to 80% without degrading performance, achieving up to 4x faster inference on a single GPU. We further perform large-scale evaluations on a diverse compound dataset comprising eight public diarization corpora, where our best pruned model achieves state-of-the-art performance across most conditions. Additionally, we show strong generalization to the CHiME-6 dataset, attaining performance comparable to the third-place system in the CHiME-7 challenge without any domain adaptation. All models and code are publicly released to support reproducibility and future research.
However, the high computational and memory costs of these models hinder their deployment in real-time and resource-constrained scenarios.
https://arxiv.org/abs/2506.18623v1
https://arxiv.org/pdf/2506.18623v1.pdf
null
[ "Jiangyu Han", "Petr Pálka", "Marc Delcroix", "Federico Landini", "Johan Rohdin", "Jan Cernocký", "Lukáš Burget" ]
[ "Domain Adaptation", "GPU", "Knowledge Distillation", "Self-Supervised Learning", "speaker-diarization", "Speaker Diarization" ]
2025-06-23T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Pruning", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Model Compression", "parent": null }, "name": "Pruning", "source_title": "Pruning Filters for Efficient ConvNets", "source_url": "http://arxiv.org/abs/1608.08710v3" } ]
https://paperswithcode.com/paper/fully-few-shot-class-incremental-audio-1
2506.18406
null
null
Fully Few-shot Class-incremental Audio Classification Using Multi-level Embedding Extractor and Ridge Regression Classifier
In the task of Few-shot Class-incremental Audio Classification (FCAC), training samples of each base class are required to be abundant to train the model. However, it is not easy to collect abundant training samples for many base classes due to data scarcity and high collection cost. We discuss a more realistic issue, Fully FCAC (FFCAC), in which only a few training samples are available for both base and incremental classes. Furthermore, we propose an FFCAC method using a model which is decoupled into a multi-level embedding extractor and a ridge regression classifier. The embedding extractor consists of an audio spectrogram Transformer encoder and a fusion module, and is trained in the base session but frozen in all incremental sessions. The classifier is updated continually in each incremental session. Results on three public datasets show that our method exceeds current methods in accuracy and has an advantage over most of them in complexity. The code is at https://github.com/YongjieSi/MAR.
In the task of Few-shot Class-incremental Audio Classification (FCAC), training samples of each base class are required to be abundant to train the model.
https://arxiv.org/abs/2506.18406v1
https://arxiv.org/pdf/2506.18406v1.pdf
null
[ "Yongjie Si", "Yanxiong Li", "Jiaxin Tan", "Qianhua He", "Il-Youp Kwak" ]
[ "Audio Classification" ]
2025-06-23T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8", "description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.", "full_name": "Layer Normalization", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.", "name": "Normalization", "parent": null }, "name": "Layer Normalization", "source_title": "Layer Normalization", "source_url": "http://arxiv.org/abs/1607.06450v1" }, { "code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275", "description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.", "full_name": "Dropout", "introduced_year": 2000, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. 
Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Dropout", "source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", "source_url": "http://jmlr.org/papers/v15/srivastava14a.html" }, { "code_snippet_url": "", "description": "**Absolute Position Encodings** are a type of position embeddings for [[Transformer](https://paperswithcode.com/method/transformer)-based models] where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d\\_{model}$ as the embeddings, so that the two can be summed. In the original implementation, sine and cosine functions of different frequencies are used:\r\n\r\n$$ \\text{PE}\\left(pos, 2i\\right) = \\sin\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\n$$ \\text{PE}\\left(pos, 2i+1\\right) = \\cos\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\nwhere $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\\pi$ to $10000 \\dot 2\\pi$. This function was chosen because the authors hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $\\text{PE}\\_{pos+k}$ can be represented as a linear function of $\\text{PE}\\_{pos}$.\r\n\r\nImage Source: [D2L.ai](https://d2l.ai/chapter_attention-mechanisms/self-attention-and-positional-encoding.html)", "full_name": "Absolute Position Encodings", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Position Embeddings", "parent": null }, "name": "Absolute Position Encodings", "source_title": "Attention Is All You Need", "source_url": "https://arxiv.org/abs/1706.03762v7" }, { "code_snippet_url": null, "description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville", "full_name": "Dense Connections", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.", "name": "Feedforward Networks", "parent": null }, "name": "Dense Connections", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. 
The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.", "full_name": "Byte Pair Encoding", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "", "name": "Subword Segmentation", "parent": null }, "name": "BPE", "source_title": "Neural Machine Translation of Rare Words with Subword Units", "source_url": "http://arxiv.org/abs/1508.07909v5" }, { "code_snippet_url": null, "description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$", "full_name": "Softmax", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.", "name": "Output Functions", "parent": null }, "name": "Softmax", "source_title": null, "source_url": null }, { "code_snippet_url": "", "description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k}$ and $1-\\frac{k-1}{k}\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)", "full_name": "Label Smoothing", "introduced_year": 1985, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Label Smoothing", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/tunz/transformer-pytorch/blob/e7266679f0b32fd99135ea617213f986ceede056/model/transformer.py#L201", "description": "A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on complex recurrent or convolutional neural networks that include an encoder and a decoder. 
The Transformer also employs an encoder and decoder, but removing recurrence in favor of [attention mechanisms](https://paperswithcode.com/methods/category/attention-mechanisms-1) allows for significantly more parallelization than methods like [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks).", "full_name": "Transformer", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Transformer", "source_title": "Attention Is All You Need", "source_url": "https://arxiv.org/abs/1706.03762v7" }, { "code_snippet_url": null, "description": "", "full_name": "Balanced Selection", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Active Learning", "parent": null }, "name": "BASE", "source_title": "Active Learning at the ImageNet Scale", "source_url": "https://arxiv.org/abs/2111.12880v1" } ]
https://paperswithcode.com/paper/infant-cry-emotion-recognition-using-improved
2506.18402
null
null
Infant Cry Emotion Recognition Using Improved ECAPA-TDNN with Multiscale Feature Fusion and Attention Enhancement
Infant cry emotion recognition is crucial for parenting and medical applications. It faces many challenges, such as subtle emotional variations, noise interference, and limited data. Existing methods lack the ability to effectively integrate multi-scale features and temporal-frequency relationships. In this study, we propose a method for infant cry emotion recognition using an improved Emphasized Channel Attention, Propagation and Aggregation in Time Delay Neural Network (ECAPA-TDNN) with both multi-scale feature fusion and attention enhancement. Experiments on a public dataset show that the proposed method achieves an accuracy of 82.20% with 1.43 MB of parameters and 0.32 GFLOPs. Moreover, our method has an advantage over the baseline methods in terms of accuracy. The code is at https://github.com/kkpretend/IETMA.
Infant cry emotion recognition is crucial for parenting and medical applications.
https://arxiv.org/abs/2506.18402v1
https://arxiv.org/pdf/2506.18402v1.pdf
null
[ "Junyu Zhou", "Yanxiong Li", "Haolin Yu" ]
[ "Emotion Recognition" ]
2025-06-23T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/low-resource-keyword-spotting-using
2506.17690
null
null
Low-resource keyword spotting using contrastively trained transformer acoustic word embeddings
We introduce a new approach, the ContrastiveTransformer, that produces acoustic word embeddings (AWEs) for the purpose of very low-resource keyword spotting. The ContrastiveTransformer, an encoder-only model, directly optimises the embedding space using normalised temperature-scaled cross entropy (NT-Xent) loss. We use this model to perform keyword spotting for radio broadcasts in Luganda and Bambara, the latter a severely under-resourced language. We compare our model to various existing AWE approaches, including those constructed from large pre-trained self-supervised models, a recurrent encoder which previously used the NT-Xent loss, and a DTW baseline. We demonstrate that the proposed contrastive transformer approach offers performance improvements over all considered existing approaches to very low-resource keyword spotting in both languages.
null
https://arxiv.org/abs/2506.17690v1
https://arxiv.org/pdf/2506.17690v1.pdf
null
[ "Julian Herreilers", "Christiaan Jacobs", "Thomas Niesler" ]
[ "Keyword Spotting", "Word Embeddings" ]
2025-06-21T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/google-research/simclr/blob/bfe07eed7f101ab51f3360100a28690e1bfbf6ec/objective.py#L38", "description": "**NT-Xent**, or **Normalized Temperature-scaled Cross Entropy Loss**, is a loss function. Let $\\text{sim}\\left(\\mathbf{u}, \\mathbf{v}\\right) = \\mathbf{u}^{T}\\mathbf{v}/||\\mathbf{u}|| ||\\mathbf{v}||$ denote the cosine similarity between two vectors $\\mathbf{u}$ and $\\mathbf{v}$. Then the loss function for a positive pair of examples $\\left(i, j\\right)$ is :\r\n\r\n$$ \\mathbb{l}\\_{i,j} = -\\log\\frac{\\exp\\left(\\text{sim}\\left(\\mathbf{z}\\_{i}, \\mathbf{z}\\_{j}\\right)/\\tau\\right)}{\\sum^{2N}\\_{k=1}\\mathcal{1}\\_{[k\\neq{i}]}\\exp\\left(\\text{sim}\\left(\\mathbf{z}\\_{i}, \\mathbf{z}\\_{k}\\right)/\\tau\\right)}$$\r\n\r\nwhere $\\mathcal{1}\\_{[k\\neq{i}]} \\in ${$0, 1$} is an indicator function evaluating to $1$ iff $k\\neq{i}$ and $\\tau$ denotes a temperature parameter. The final loss is computed across all positive pairs, both $\\left(i, j\\right)$ and $\\left(j, i\\right)$, in a mini-batch.\r\n\r\nSource: [SimCLR](https://paperswithcode.com/method/simclr)", "full_name": "Normalized Temperature-scaled Cross Entropy Loss", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Loss Functions** are used to frame the problem to be optimized within deep learning. Below you will find a continuously updating list of (specialized) loss functions for neutral networks.", "name": "Loss Functions", "parent": null }, "name": "NT-Xent", "source_title": "Improved Deep Metric Learning with Multi-class N-pair Loss Objective", "source_url": "http://papers.nips.cc/paper/6200-improved-deep-metric-learning-with-multi-class-n-pair-loss-objective" }, { "code_snippet_url": "https://dynamictimewarping.github.io/", "description": "Dynamic Time Warping (DTW) [1] is one of well-known distance measures between a pairwise of time series. The main idea of DTW is to compute the distance from the matching of similar elements between time series. It uses the dynamic programming technique to find the optimal temporal matching between elements of two time series.\r\n\r\nFor instance, similarities in walking could be detected using DTW, even if one person was walking faster than the other, or if there were accelerations and decelerations during the course of an observation. DTW has been applied to temporal sequences of video, audio, and graphics data — indeed, any data that can be turned into a linear sequence can be analyzed with DTW. A well known application has been automatic speech recognition, to cope with different speaking speeds. Other applications include speaker recognition and online signature recognition. It can also be used in partial shape matching application.\r\n\r\nIn general, DTW is a method that calculates an optimal match between two given sequences (e.g. time series) with certain restriction and rules:\r\n\r\n1. Every index from the first sequence must be matched with one or more indices from the other sequence, and vice versa\r\n2. The first index from the first sequence must be matched with the first index from the other sequence (but it does not have to be its only match)\r\n3. The last index from the first sequence must be matched with the last index from the other sequence (but it does not have to be its only match)\r\n4. The mapping of the indices from the first sequence to indices from the other sequence must be monotonically increasing, and vice versa, i.e. 
if j>i are indices from the first sequence, then there must not be two indices l>k in the other sequence, such that index i is matched with index l and index j is matched with index k, and vice versa.\r\n\r\n[1] Sakoe, Hiroaki, and Seibi Chiba. \"Dynamic programming algorithm optimization for spoken word recognition.\" IEEE transactions on acoustics, speech, and signal processing 26, no. 1 (1978): 43-49.", "full_name": "Dynamic Time Warping", "introduced_year": 2000, "main_collection": { "area": "Sequential", "description": "Time series analysis comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data.", "name": "Time Series Analysis", "parent": null }, "name": "DTW", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/h-quest-accelerating-query-by-example-spoken
2506.16751
null
null
H-QuEST: Accelerating Query-by-Example Spoken Term Detection with Hierarchical Indexing
Query-by-example spoken term detection (QbE-STD) searches for matching words or phrases in an audio dataset using a sample spoken query. When annotated data is limited or unavailable, QbE-STD is often done using template matching methods like dynamic time warping (DTW), which are computationally expensive and do not scale well. To address this, we propose H-QuEST (Hierarchical Query-by-Example Spoken Term Detection), a novel framework that accelerates spoken term retrieval by utilizing Term Frequency and Inverse Document Frequency (TF-IDF)-based sparse representations obtained through advanced audio representation learning techniques and Hierarchical Navigable Small World (HNSW) indexing with further refinement. Experimental results show that H-QuEST delivers substantial improvements in retrieval speed without sacrificing accuracy compared to existing methods.
null
https://arxiv.org/abs/2506.16751v1
https://arxiv.org/pdf/2506.16751v1.pdf
null
[ "Akanksha Singh", "Yi-Ping Phoebe Chen", "Vipul Arora" ]
[ "Dynamic Time Warping", "Representation Learning", "Retrieval", "Template Matching" ]
2025-06-20T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/lorenzopapa5/SPEED", "description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.", "full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings", "introduced_year": 2000, "main_collection": null, "name": "SPEED", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/rapflow-tts-rapid-and-high-fidelity-text-to
2506.16741
null
null
RapFlow-TTS: Rapid and High-Fidelity Text-to-Speech with Improved Consistency Flow Matching
We introduce RapFlow-TTS, a rapid and high-fidelity TTS acoustic model that leverages velocity consistency constraints in flow matching (FM) training. Although ordinary differential equation (ODE)-based TTS generation achieves natural-quality speech, it typically requires a large number of generation steps, resulting in a trade-off between quality and inference speed. To address this challenge, RapFlow-TTS enforces consistency in the velocity field along the FM-straightened ODE trajectory, enabling consistent synthetic quality with fewer generation steps. Additionally, we introduce techniques such as time interval scheduling and adversarial learning to further enhance the quality of the few-step synthesis. Experimental results show that RapFlow-TTS achieves high-fidelity speech synthesis with a 5- and 10-fold reduction in synthesis steps compared with the conventional FM- and score-based approaches, respectively.
We introduce RapFlow-TTS, a rapid and high-fidelity TTS acoustic model that leverages velocity consistency constraints in flow matching (FM) training.
https://arxiv.org/abs/2506.16741v1
https://arxiv.org/pdf/2506.16741v1.pdf
null
[ "Hyun Joon Park", "Jeongmin Liu", "Jin Sob Kim", "Jeong Yeol Yang", "Sung Won Han", "Eunwoo Song" ]
[ "Scheduling", "Speech Synthesis", "text-to-speech", "Text to Speech" ]
2025-06-20T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/end-to-end-speech-translation-for-low
2506.16251
null
null
End-to-End Speech Translation for Low-Resource Languages Using Weakly Labeled Data
The scarcity of high-quality annotated data presents a significant challenge in developing effective end-to-end speech-to-text translation (ST) systems, particularly for low-resource languages. This paper explores the hypothesis that weakly labeled data can be used to build ST models for low-resource language pairs. We constructed speech-to-text translation datasets with the help of bitext mining using state-of-the-art sentence encoders. We mined the multilingual Shrutilipi corpus to build Shrutilipi-anuvaad, a dataset comprising ST data for language pairs Bengali-Hindi, Malayalam-Hindi, Odia-Hindi, and Telugu-Hindi. We created multiple versions of training data with varying degrees of quality and quantity to investigate the effect of quality versus quantity of weakly labeled data on ST model performance. Results demonstrate that ST systems can be built using weakly labeled data, with performance comparable to massive multi-modal multilingual baselines such as SONAR and SeamlessM4T.
null
https://arxiv.org/abs/2506.16251v1
https://arxiv.org/pdf/2506.16251v1.pdf
null
[ "Aishwarya Pothula", "Bhavana Akkiraju", "Srihari Bandarupalli", "Charan D", "Santosh Kesiraju", "Anil Kumar Vuppala" ]
[ "Sentence", "Speech-to-Text", "Speech-to-Text Translation", "Translation" ]
2025-06-19T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/thinking-in-directivity-speech-large-language
2506.14973
null
null
Thinking in Directivity: Speech Large Language Model for Multi-Talker Directional Speech Recognition
Recent studies have demonstrated that prompting large language models (LLM) with audio encodings enables effective speech recognition capabilities. However, the ability of Speech LLMs to comprehend and process multi-channel audio with spatial cues remains a relatively uninvestigated area of research. In this work, we present directional-SpeechLlama, a novel approach that leverages the microphone array of smart glasses to achieve directional speech recognition, source localization, and bystander cross-talk suppression. To enhance the model's ability to understand directivity, we propose two key techniques: serialized directional output training (S-DOT) and contrastive direction data augmentation (CDDA). Experimental results show that our proposed directional-SpeechLlama effectively captures the relationship between textual cues and spatial audio, yielding strong performance in both speech recognition and source localization tasks.
null
https://arxiv.org/abs/2506.14973v1
https://arxiv.org/pdf/2506.14973v1.pdf
null
[ "Jiamin Xie", "Ju Lin", "Yiteng Huang", "Tyler Vuong", "Zhaojiang Lin", "Zhaojun Yang", "Peng Su", "Prashant Rawat", "Sangeeta Srivastava", "Ming Sun", "Florian Metze" ]
[ "Data Augmentation", "Language Modeling", "Language Modelling", "Large Language Model", "speech-recognition", "Speech Recognition" ]
2025-06-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/ntu-speechlab-llm-based-multilingual-asr
2506.13339
null
null
NTU Speechlab LLM-Based Multilingual ASR System for Interspeech MLC-SLM Challenge 2025
This report details the NTU Speechlab system developed for the Interspeech 2025 Multilingual Conversational Speech and Language Model (MLC-SLM) Challenge (Task I), where we achieved 5th place. We present comprehensive analyses of our multilingual automatic speech recognition system, highlighting key advancements in model architecture, data selection, and training strategies. In particular, language-specific prompts and model averaging techniques were instrumental in boosting system performance across diverse languages. Compared to the initial baseline system, our final model reduced the average Mix Error Rate from 20.2% to 10.6%, representing an absolute improvement of 9.6% (a relative improvement of 48%) on the evaluation set. Our results demonstrate the effectiveness of our approach and offer practical insights for future Speech Large Language Models.
null
https://arxiv.org/abs/2506.13339v1
https://arxiv.org/pdf/2506.13339v1.pdf
null
[ "Yizhou Peng", "Bin Wang", "Yi-Wen Chao", "Ziyang Ma", "Haoyang Zhang", "Hexin Liu", "Xie Chen", "Eng Siong Chng" ]
[ "Automatic Speech Recognition", "Language Modeling", "Language Modelling", "speech-recognition", "Speech Recognition" ]
2025-06-16T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/speech-language-models-with-decoupled
2506.12537
null
null
Speech-Language Models with Decoupled Tokenizers and Multi-Token Prediction
Speech-language models (SLMs) offer a promising path toward unifying speech and text understanding and generation. However, challenges remain in achieving effective cross-modal alignment and high-quality speech generation. In this work, we systematically investigate the impact of key components (i.e., speech tokenizers, speech heads, and speaker modeling) on the performance of LLM-centric SLMs. We compare coupled, semi-decoupled, and fully decoupled speech tokenizers under a fair SLM framework and find that decoupled tokenization significantly improves alignment and synthesis quality. To address the information density mismatch between speech and text, we introduce multi-token prediction (MTP) into SLMs, enabling each hidden state to decode multiple speech tokens. This leads to up to 12$\times$ faster decoding and a substantial drop in word error rate (from 6.07 to 3.01). Furthermore, we propose a speaker-aware generation paradigm and introduce RoleTriviaQA, a large-scale role-playing knowledge QA benchmark with diverse speaker identities. Experiments demonstrate that our methods enhance both knowledge understanding and speaker consistency.
null
https://arxiv.org/abs/2506.12537v1
https://arxiv.org/pdf/2506.12537v1.pdf
null
[ "Xiaoran Fan", "Zhichao Sun", "Yangfan Gao", "Jingfei Xiong", "Hang Yan", "Yifei Cao", "Jiajun Sun", "Shuo Li", "Zhihao Zhang", "Zhiheng Xi", "Yuhao Zhou", "Senjie Jin", "Changhao Jiang", "Junjie Ye", "Ming Zhang", "Rui Zheng", "Zhenhua Han", "Yunke Zhang", "Demei Yan", "Shaokang Dong", "Tao Ji", "Tao Gui", "Qi Zhang", "Xuanjing Huang" ]
[ "cross-modal alignment" ]
2025-06-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/tracking-of-spatially-dynamic-room-impulse
2506.11703
null
null
Tracking of Spatially Dynamic Room Impulse Responses Along Locally Linearized Trajectories
Measuring room impulse responses (RIRs) at multiple spatial points is a time-consuming task, while simulations require detailed knowledge of the room's acoustic environment. In prior work, we proposed a method for estimating the early part of RIRs along a linear trajectory in a time-varying acoustic scenario involving a static sound source and a microphone moving at constant velocity. This approach relies on measured RIRs at the start and end points of the trajectory and assumes that the time intervals occupied by the direct sound and individual reflections along the trajectory are non-overlapping. The method's applicability is therefore restricted to relatively small areas within a room, and its performance has yet to be validated with real-world data. In this paper, we propose a practical extension of the method to more realistic scenarios by segmenting longer trajectories into smaller linear intervals where the assumptions approximately hold. Applying the method piecewise along these segments extends its applicability to more complex room environments. We demonstrate its effectiveness using the trajectoRIR database, which includes moving microphone recordings and RIR measurements at discrete points along a controlled L-shaped trajectory in a real room.
null
https://arxiv.org/abs/2506.11703v1
https://arxiv.org/pdf/2506.11703v1.pdf
null
[ "Kathleen MacWilliam", "Thomas Dietzen", "Toon van Waterschoot" ]
[]
2025-06-13T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/lightweight-and-robust-multi-channel-end-to
2506.11630
null
null
Lightweight and Robust Multi-Channel End-to-End Speech Recognition with Spherical Harmonic Transform
This paper presents SHTNet, a lightweight spherical harmonic transform (SHT) based framework, which is designed to address cross-array generalization challenges in multi-channel automatic speech recognition (ASR) through three key innovations. First, SHT based spatial sound field decomposition converts microphone signals into geometry-invariant spherical harmonic coefficients, isolating signal processing from array geometry. Second, the Spatio-Spectral Attention Fusion Network (SSAFN) combines coordinate-aware spatial modeling, refined self-attention channel combinator, and spectral noise suppression without conventional beamforming. Third, Rand-SHT training enhances robustness through random channel selection and array geometry reconstruction. The system achieves 39.26\% average CER across heterogeneous arrays (e.g., circular, square, and binaural) on datasets including Aishell-4, Alimeeting, and XMOS, with 97.1\% fewer computations than conventional neural beamformers.
null
https://arxiv.org/abs/2506.11630v1
https://arxiv.org/pdf/2506.11630v1.pdf
null
[ "Xiangzhu Kong", "Huang Hao", "Zhijian Ou" ]
[ "Automatic Speech Recognition", "Automatic Speech Recognition (ASR)", "channel selection", "speech-recognition", "Speech Recognition" ]
2025-06-13T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/improved-in-car-sound-pick-up-using
2506.11157
null
null
Improved in-car sound pick-up using multichannel Wiener filter
With advancements in automotive electronics and sensors, the sound pick-up using multiple microphones has become feasible for hands-free telephony and voice command in-car applications. However, challenges remain in effectively processing multiple microphone signals due to bandwidth or processing limitations. This work explores the use of the Multichannel Wiener Filter algorithm with a two-microphone in-car system, to enhance speech quality for driver and passenger voice, i.e., to mitigate notch-filtering effects caused by echoes and improve background noise reduction. We evaluate its performance under various noise conditions using modern objective metrics like Deep Noise Suppression Mean Opinion Score. The effect of head movements of driver/passenger is also investigated. The proposed method is shown to provide significant improvements over a simple mixing of microphone signals.
null
https://arxiv.org/abs/2506.11157v1
https://arxiv.org/pdf/2506.11157v1.pdf
null
[ "Juhi Khalid", "Martin Bouchard" ]
[]
2025-06-11T00:00:00
null
null
null
null
[]
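A minimal sketch of a two-microphone multichannel Wiener filter, assuming noisy-speech and noise-only covariance estimates are available from elsewhere (e.g., recursive averaging gated by a voice-activity detector); the echo handling and objective-metric evaluation described in the record above are not shown.

```python
# Minimal multichannel Wiener filter (MWF) for one frequency bin, NumPy only.
import numpy as np

def mwf_weights(R_yy: np.ndarray, R_nn: np.ndarray, ref: int = 0, mu: float = 1.0):
    """Speech-distortion-weighted MWF: w = (R_xx + mu * R_nn)^{-1} R_xx e_ref,
    where R_xx = R_yy - R_nn and e_ref selects the reference microphone."""
    R_xx = R_yy - R_nn
    e_ref = np.zeros(R_yy.shape[0]); e_ref[ref] = 1.0
    return np.linalg.solve(R_xx + mu * R_nn, R_xx @ e_ref)

# Toy two-microphone example: one speech source observed with different gains plus noise.
rng = np.random.default_rng(1)
s = rng.normal(size=1000)
speech = np.vstack([s, 0.8 * s])            # same source at both mics, different gains
noise = 0.5 * rng.normal(size=(2, 1000))
y = speech + noise
R_yy = y @ y.T / y.shape[1]
R_nn = noise @ noise.T / noise.shape[1]
w = mwf_weights(R_yy, R_nn, ref=0)
estimate = w @ y                            # enhanced signal at the reference microphone
print(w, np.mean((estimate - speech[0]) ** 2))
```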
https://paperswithcode.com/paper/customizing-speech-recognition-model-with
2506.11091
null
null
Customizing Speech Recognition Model with Large Language Model Feedback
Automatic speech recognition (ASR) systems have achieved strong performance on general transcription tasks. However, they continue to struggle with recognizing rare named entities and adapting to domain mismatches. In contrast, large language models (LLMs), trained on massive internet-scale datasets, are often more effective across a wide range of domains. In this work, we propose a reinforcement learning based approach for unsupervised domain adaptation, leveraging unlabeled data to enhance transcription quality, particularly the named entities affected by domain mismatch, through feedback from an LLM. Given contextual information, our framework employs an LLM as the reward model to score the hypotheses from the ASR model. These scores serve as reward signals to fine-tune the ASR model via reinforcement learning. Our method achieves a 21\% improvement on entity word error rate over conventional self-training methods.
null
https://arxiv.org/abs/2506.11091v1
https://arxiv.org/pdf/2506.11091v1.pdf
null
[ "Shaoshi Ling", "Guoli Ye" ]
[ "Automatic Speech Recognition", "Automatic Speech Recognition (ASR)", "Domain Adaptation", "Language Modeling", "Language Modelling", "Large Language Model", "model", "reinforcement-learning", "Reinforcement Learning", "speech-recognition", "Speech Recognition", "Unsupervised Domain Adaptation" ]
2025-06-05T00:00:00
null
null
null
null
[]
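The reward-model idea in the record above can be sketched as a generic policy-gradient update where an external scorer stands in for the LLM judge; the real system samples ASR hypotheses and prompts an LLM with context, whereas the toy "decoder", vocabulary, and reward function below are all made-up stand-ins.

```python
# Generic REINFORCE-style sketch with an external reward scorer in place of an LLM judge.
import torch

vocab = ["the", "patient", "acme", "corp", "visited", "<eos>"]
policy = torch.nn.Linear(8, len(vocab))        # toy decoder over a fixed context vector

def sample_hypothesis(context, max_len=5):
    tokens, logprob = [], torch.zeros(())
    for _ in range(max_len):
        dist = torch.distributions.Categorical(logits=policy(context))
        tok = dist.sample()
        logprob = logprob + dist.log_prob(tok)
        tokens.append(vocab[int(tok)])
        if tokens[-1] == "<eos>":
            break
    return tokens, logprob

def judge(tokens):
    # Stand-in for the LLM reward model: favour hypotheses containing a rare entity.
    return 1.0 if "acme" in tokens else 0.0

context = torch.randn(8)
opt = torch.optim.SGD(policy.parameters(), lr=0.1)
for step in range(3):
    hyps = [sample_hypothesis(context) for _ in range(4)]
    rewards = torch.tensor([judge(t) for t, _ in hyps])
    baseline = rewards.mean()                  # simple variance-reduction baseline
    loss = -torch.stack([(r - baseline) * lp for (_, lp), r in zip(hyps, rewards)]).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```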
https://paperswithcode.com/paper/better-pseudo-labeling-with-multi-asr-fusion
2506.11089
null
null
Better Pseudo-labeling with Multi-ASR Fusion and Error Correction by SpeechLLM
Automatic speech recognition (ASR) models rely on high-quality transcribed data for effective training. Generating pseudo-labels for large unlabeled audio datasets often relies on complex pipelines that combine multiple ASR outputs through multi-stage processing, leading to error propagation, information loss and disjoint optimization. We propose a unified multi-ASR prompt-driven framework using postprocessing by either textual or speech-based large language models (LLMs), replacing voting or other arbitration logic for reconciling the ensemble outputs. We perform a comparative study of multiple architectures with and without LLMs, showing significant improvements in transcription accuracy compared to traditional methods. Furthermore, we use the pseudo-labels generated by the various approaches to train semi-supervised ASR models for different datasets, again showing improved performance with textual and speechLLM transcriptions compared to baselines.
null
https://arxiv.org/abs/2506.11089v1
https://arxiv.org/pdf/2506.11089v1.pdf
null
[ "Jeena Prakash", "Blessingh Kumar", "Kadri Hacioglu", "Bidisha Sharma", "Sindhuja Gopalan", "Malolan Chetlur", "Shankar Venkatesan", "Andreas Stolcke" ]
[ "Automatic Speech Recognition", "Automatic Speech Recognition (ASR)", "speech-recognition", "Speech Recognition" ]
2025-06-05T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/intelligibility-of-text-to-speech-systems-for
2506.11086
null
null
Intelligibility of Text-to-Speech Systems for Mathematical Expressions
There has been limited evaluation of advanced Text-to-Speech (TTS) models with Mathematical eXpressions (MX) as inputs. In this work, we design experiments to evaluate quality and intelligibility of five TTS models through listening and transcribing tests for various categories of MX. We use two Large Language Models (LLMs) to generate English pronunciation from LaTeX MX as TTS models cannot process LaTeX directly. We use Mean Opinion Score from user ratings and quantify intelligibility through transcription correctness using three metrics. We also compare listener preference of TTS outputs with respect to human expert rendition of same MX. Results establish that output of TTS models for MX is not necessarily intelligible, the gap in intelligibility varies across TTS models and MX category. For most categories, performance of TTS models is significantly worse than that of expert rendition. The effect of choice of LLM is limited. This establishes the need to improve TTS models for MX.
null
https://arxiv.org/abs/2506.11086v1
https://arxiv.org/pdf/2506.11086v1.pdf
null
[ "Sujoy Roychowdhury", "H. G. Ranjani", "Sumit Soman", "Nishtha Paul", "Subhadip Bandyopadhyay", "Siddhanth Iyengar" ]
[ "text-to-speech", "Text to Speech" ]
2025-06-05T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/seamless-dysfluent-speech-text-alignment-for
2506.12073
null
null
Seamless Dysfluent Speech Text Alignment for Disordered Speech Analysis
Accurate alignment of dysfluent speech with intended text is crucial for automating the diagnosis of neurodegenerative speech disorders. Traditional methods often fail to model phoneme similarities effectively, limiting their performance. In this work, we propose Neural LCS, a novel approach for dysfluent text-text and speech-text alignment. Neural LCS addresses key challenges, including partial alignment and context-aware similarity mapping, by leveraging robust phoneme-level modeling. We evaluate our method on a large-scale simulated dataset, generated using advanced data simulation techniques, and real PPA data. Neural LCS significantly outperforms state-of-the-art models in both alignment accuracy and dysfluent speech segmentation. Our results demonstrate the potential of Neural LCS to enhance automated systems for diagnosing and analyzing speech disorders, offering a more accurate and linguistically grounded solution for dysfluent speech alignment.
null
https://arxiv.org/abs/2506.12073v1
https://arxiv.org/pdf/2506.12073v1.pdf
null
[ "Zongli Ye", "Jiachen Lian", "Xuanru Zhou", "Jinming Zhang", "Haodong Li", "Shuhe Li", "Chenxu Guo", "Anaisha Das", "Peter Park", "Zoe Ezzes", "Jet Vonk", "Brittany Morin", "Rian Bogley", "Lisa Wauters", "Zachary Miller", "Maria Gorno-Tempini", "Gopala Anumanchipalli" ]
[]
2025-06-05T00:00:00
null
null
null
null
[]
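For context on the alignment primitive named in the Neural LCS record above, here is the classical longest-common-subsequence dynamic program over phoneme strings; per the abstract, Neural LCS replaces the exact-match test with learned, context-aware phoneme similarity, which this sketch does not attempt to reproduce.

```python
# Classic LCS dynamic program over phoneme sequences (exact match only).
def lcs_align(ref, hyp):
    """ref, hyp: lists of phoneme symbols. Returns (LCS length, aligned index pairs)."""
    n, m = len(ref), len(hyp)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if ref[i - 1] == hyp[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Backtrack to recover aligned phoneme index pairs
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        if ref[i - 1] == hyp[j - 1]:
            pairs.append((i - 1, j - 1)); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return dp[n][m], pairs[::-1]

# Intended phonemes vs. a dysfluent production with a repetition and a substitution.
length, pairs = lcs_align(["DH", "AH", "K", "AE", "T"], ["DH", "DH", "AH", "K", "AA", "T"])
print(length, pairs)
```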
https://paperswithcode.com/paper/improving-child-speech-recognition-and
2506.11079
null
null
Improving Child Speech Recognition and Reading Mistake Detection by Using Prompts
Automatic reading aloud evaluation can provide valuable support to teachers by enabling more efficient scoring of reading exercises. However, research on reading evaluation systems and applications remains limited. We present a novel multimodal approach that leverages audio and knowledge from text resources. In particular, we explored the potential of using Whisper and instruction-tuned large language models (LLMs) with prompts to improve transcriptions for child speech recognition, as well as their effectiveness in downstream reading mistake detection. Our results demonstrate the effectiveness of prompting Whisper and prompting LLM, compared to the baseline Whisper model without prompting. The best performing system achieved state-of-the-art recognition performance in Dutch child read speech, with a word error rate (WER) of 5.1%, improving the baseline WER of 9.4%. Furthermore, it significantly improved reading mistake detection, increasing the F1 score from 0.39 to 0.73.
null
https://arxiv.org/abs/2506.11079v1
https://arxiv.org/pdf/2506.11079v1.pdf
null
[ "Lingyun Gao", "Cristian Tejedor-Garcia", "Catia Cucchiarini", "Helmer Strik" ]
[ "Mistake Detection", "speech-recognition", "Speech Recognition" ]
2025-06-04T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/fifteen-years-of-child-centered-long-form
2506.11075
null
null
Fifteen Years of Child-Centered Long-Form Recordings: Promises, Resources, and Remaining Challenges to Validity
Audio-recordings collected with a child-worn device are a fundamental tool in child language research. Long-form recordings collected over whole days promise to capture children's input and production with minimal observer bias, and therefore high validity. The sheer volume of resulting data necessitates automated analysis to extract relevant metrics for researchers and clinicians. This paper summarizes collective knowledge on this technique, providing entry points to existing resources. We also highlight various sources of error that threaten the accuracy of automated annotations and the interpretation of resulting metrics. To address this, we propose potential troubleshooting metrics to help users assess data quality. While a fully automated quality control system is not feasible, we outline practical strategies for researchers to improve data collection and contextualize their analyses.
null
https://arxiv.org/abs/2506.11075v1
https://arxiv.org/pdf/2506.11075v1.pdf
null
[ "Loann Peurey", "Marvin Lavechin", "Tarek Kunze", "Manel Khentout", "Lucas Gautheron", "Emmanuel Dupoux", "Alejandrina Cristia" ]
[ "Form" ]
2025-06-04T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/challenges-in-automated-processing-of-speech
2506.11074
null
null
Challenges in Automated Processing of Speech from Child Wearables: The Case of Voice Type Classifier
Recordings gathered with child-worn devices promised to revolutionize both fundamental and applied speech sciences by allowing the effortless capture of children's naturalistic speech environment and language production. This promise hinges on speech technologies that can transform the sheer mounds of data thus collected into usable information. This paper demonstrates several obstacles blocking progress by summarizing three years' worth of experiments aimed at improving one fundamental task: Voice Type Classification. Our experiments suggest that improvements in representation features, architecture, and parameter search contribute to only marginal gains in performance. More progress is made by focusing on data relevance and quantity, which highlights the importance of collecting data with appropriate permissions to allow sharing.
null
https://arxiv.org/abs/2506.11074v1
https://arxiv.org/pdf/2506.11074v1.pdf
null
[ "Tarek Kunze", "Marianne Métais", "Hadrien Titeux", "Lucas Elbert", "Joseph Coffey", "Emmanuel Dupoux", "Alejandrina Cristia", "Marvin Lavechin" ]
[ "Blocking" ]
2025-06-04T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/can-we-trust-machine-learning-the-reliability
2506.11072
null
null
Can We Trust Machine Learning? The Reliability of Features from Open-Source Speech Analysis Tools for Speech Modeling
Machine learning-based behavioral models rely on features extracted from audio-visual recordings. The recordings are processed using open-source tools to extract speech features for classification models. These tools often lack validation to ensure reliability in capturing behaviorally relevant information. This gap raises concerns about reproducibility and fairness across diverse populations and contexts. Speech processing tools, when used outside of their design context, can fail to capture behavioral variations equitably and can then contribute to bias. We evaluate speech features extracted from two widely used speech analysis tools, OpenSMILE and Praat, to assess their reliability when considering adolescents with autism. We observed considerable variation in features across tools, which influenced model performance across context and demographic groups. We encourage domain-relevant verification to enhance the reliability of machine learning models in clinical applications.
null
https://arxiv.org/abs/2506.11072v1
https://arxiv.org/pdf/2506.11072v1.pdf
null
[ "Tahiya Chowdhury", "Veronica Romero" ]
[ "Fairness" ]
2025-06-02T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/embedded-acoustic-intelligence-for-automotive
2506.11071
null
null
Embedded Acoustic Intelligence for Automotive Systems
Transforming sound insights into actionable streams of data, this abstract leverages findings from degree thesis research to enhance automotive system intelligence, enabling us to address road type [1]. By extracting and interpreting acoustic signatures from microphones installed within the wheelbase of a car, we focus on classifying road type. Utilizing deep neural networks and feature extraction powered by pre-trained models from the Open AI ecosystem (via Hugging Face [2]), our approach enables Autonomous Driving and Advanced Driver-Assistance Systems (AD/ADAS) to anticipate road surfaces, support adaptive learning for active road noise cancellation, and generate valuable insights for urban planning. The results of this study were specifically captured to support a compelling business case for next-generation automotive systems. This forward-looking approach not only promises to redefine passenger comfort and improve vehicle safety, but also paves the way for intelligent, data-driven urban road management, making the future of mobility both achievable and sustainable.
null
https://arxiv.org/abs/2506.11071v1
https://arxiv.org/pdf/2506.11071v1.pdf
null
[ "Renjith Rajagopal", "Peter Winzell", "Sladjana Strbac", "Konstantin Lindström", "Petter Hörling", "Faisal Kohestani", "Niloofar Mehrzad" ]
[ "Autonomous Driving" ]
2025-06-02T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Focus", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Focus", "source_title": "Focus Your Attention (with Adaptive IIR Filters)", "source_url": "https://arxiv.org/abs/2305.14952v2" } ]
https://paperswithcode.com/paper/exploring-the-design-space-of-3d-mllms-for-ct
2506.21535
null
null
Exploring the Design Space of 3D MLLMs for CT Report Generation
Multimodal Large Language Models (MLLMs) have emerged as a promising way to automate Radiology Report Generation (RRG). In this work, we systematically investigate the design space of 3D MLLMs, including visual input representation, projectors, Large Language Models (LLMs), and fine-tuning techniques for 3D CT report generation. We also introduce two knowledge-based report augmentation methods that improve performance on the GREEN score by up to 10\%, achieving the 2nd place on the MICCAI 2024 AMOS-MM challenge. Our results on the 1,687 cases from the AMOS-MM dataset show that RRG is largely independent of the size of LLM under the same training protocol. We also show that larger volume size does not always improve performance if the original ViT was pre-trained on a smaller volume size. Lastly, we show that using a segmentation mask along with the CT volume improves performance. The code is publicly available at https://github.com/bowang-lab/AMOS-MM-Solution
We also show that larger volume size does not always improve performance if the original ViT was pre-trained on a smaller volume size.
https://arxiv.org/abs/2506.21535v1
https://arxiv.org/pdf/2506.21535v1.pdf
null
[ "Mohammed Baharoon", "Jun Ma", "Congyu Fang", "Augustin Toma", "Bo wang" ]
[]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/lightweight-physics-informed-zero-shot
2506.21499
null
null
Lightweight Physics-Informed Zero-Shot Ultrasound Plane Wave Denoising
Ultrasound Coherent Plane Wave Compounding (CPWC) enhances image contrast by combining echoes from multiple steered transmissions. While increasing the number of angles generally improves image quality, it drastically reduces the frame rate and can introduce blurring artifacts in fast-moving targets. Moreover, compounded images remain susceptible to noise, particularly when acquired with a limited number of transmissions. We propose a zero-shot denoising framework tailored for low-angle CPWC acquisitions, which enhances contrast without relying on a separate training dataset. The method divides the available transmission angles into two disjoint subsets, each used to form compound images that include higher noise levels. The new compounded images are then used to train a deep model via a self-supervised residual learning scheme, enabling it to suppress incoherent noise while preserving anatomical structures. Because angle-dependent artifacts vary between the subsets while the underlying tissue response is similar, this physics-informed pairing allows the network to learn to disentangle the inconsistent artifacts from the consistent tissue signal. Unlike supervised methods, our model requires no domain-specific fine-tuning or paired data, making it adaptable across anatomical regions and acquisition setups. The entire pipeline supports efficient training with low computational cost due to the use of a lightweight architecture, which comprises only two convolutional layers. Evaluations on simulation, phantom, and in vivo data demonstrate superior contrast enhancement and structure preservation compared to both classical and deep learning-based denoising methods.
null
https://arxiv.org/abs/2506.21499v1
https://arxiv.org/pdf/2506.21499v1.pdf
null
[ "Hojat Asgariandehkordi", "Mostafa Sharifzadeh", "Hassan Rivaz" ]
[ "Denoising" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/generalizable-neural-electromagnetic-inverse
2506.21349
null
null
Generalizable Neural Electromagnetic Inverse Scattering
Solving Electromagnetic Inverse Scattering Problems (EISP) is fundamental in applications such as medical imaging, where the goal is to reconstruct the relative permittivity from scattered electromagnetic field. This inverse process is inherently ill-posed and highly nonlinear, making it particularly challenging. A recent machine learning-based approach, Img-Interiors, shows promising results by leveraging continuous implicit functions. However, it requires case-specific optimization, lacks generalization to unseen data, and fails under sparse transmitter setups (e.g., with only one transmitter). To address these limitations, we revisit EISP from a physics-informed perspective, reformulating it as a two stage inverse transmission-scattering process. This formulation reveals the induced current as a generalizable intermediate representation, effectively decoupling the nonlinear scattering process from the ill-posed inverse problem. Built on this insight, we propose the first generalizable physics-driven framework for EISP, comprising a current estimator and a permittivity solver, working in an end-to-end manner. The current estimator explicitly learns the induced current as a physical bridge between the incident and scattered field, while the permittivity solver computes the relative permittivity directly from the estimated induced current. This design enables data-driven training and generalizable feed-forward prediction of relative permittivity on unseen data while maintaining strong robustness to transmitter sparsity. Extensive experiments show that our method outperforms state-of-the-art approaches in reconstruction accuracy, generalization, and robustness. This work offers a fundamentally new perspective on electromagnetic inverse scattering and represents a major step toward cost-effective practical solutions for electromagnetic imaging.
null
https://arxiv.org/abs/2506.21349v1
https://arxiv.org/pdf/2506.21349v1.pdf
null
[ "Yizhe Cheng", "Chunxun Tian", "Haoru Wang", "Wentao Zhu", "Xiaoxuan Ma", "Yizhou Wang" ]
[]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/semantic-scene-graph-for-ultrasound-image
2506.19683
null
null
Semantic Scene Graph for Ultrasound Image Explanation and Scanning Guidance
Understanding medical ultrasound imaging remains a long-standing challenge due to significant visual variability caused by differences in imaging and acquisition parameters. Recent advancements in large language models (LLMs) have been used to automatically generate terminology-rich summaries orientated to clinicians with sufficient physiological knowledge. Nevertheless, the increasing demand for improved ultrasound interpretability and basic scanning guidance among non-expert users, e.g., in point-of-care settings, has not yet been explored. In this study, we first introduce the scene graph (SG) for ultrasound images to explain image content to ordinary users and provide guidance for ultrasound scanning. The ultrasound SG is first computed using a transformer-based one-stage method, eliminating the need for explicit object detection. To generate a graspable image explanation for ordinary users, the user query is then used to further refine the abstract SG representation through LLMs. Additionally, the predicted SG is explored for its potential in guiding ultrasound scanning toward missing anatomies within the current imaging view, assisting ordinary users in achieving more standardized and complete anatomical exploration. The effectiveness of this SG-based image explanation and scanning guidance has been validated on images from the left and right neck regions, including the carotid and thyroid, across five volunteers. The results demonstrate the potential of the method to maximally democratize ultrasound by enhancing its interpretability and usability for ordinary users.
null
https://arxiv.org/abs/2506.19683v2
https://arxiv.org/pdf/2506.19683v2.pdf
null
[ "Xuesong Li", "Dianye Huang", "Yameng Zhang", "Nassir Navab", "Zhongliang Jiang" ]
[]
2025-06-24T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/ganet-seg-adversarial-learning-for-brain
2506.21245
null
null
GANet-Seg: Adversarial Learning for Brain Tumor Segmentation with Hybrid Generative Models
This work introduces a novel framework for brain tumor segmentation leveraging pre-trained GANs and Unet architectures. By combining a global anomaly detection module with a refined mask generation network, the proposed model accurately identifies tumor-sensitive regions and iteratively enhances segmentation precision using adversarial loss constraints. Multi-modal MRI data and synthetic image augmentation are employed to improve robustness and address the challenge of limited annotated datasets. Experimental results on the BraTS dataset demonstrate the effectiveness of the approach, achieving higher sensitivity and accuracy than the baseline on both lesion-wise Dice and HD95 metrics. This scalable method minimizes the dependency on fully annotated data, paving the way for practical real-world applications in clinical settings.
null
https://arxiv.org/abs/2506.21245v1
https://arxiv.org/pdf/2506.21245v1.pdf
null
[ "Qifei Cui", "Xinyu Lu" ]
[ "Anomaly Detection", "Brain Tumor Segmentation", "Image Augmentation", "Sensitivity", "Tumor Segmentation" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/uncover-treasures-in-dct-advancing-jpeg
2506.21171
null
null
Uncover Treasures in DCT: Advancing JPEG Quality Enhancement by Exploiting Latent Correlations
Joint Photographic Experts Group (JPEG) achieves data compression by quantizing Discrete Cosine Transform (DCT) coefficients, which inevitably introduces compression artifacts. Most existing JPEG quality enhancement methods operate in the pixel domain, suffering from the high computational costs of decoding. Consequently, direct enhancement of JPEG images in the DCT domain has gained increasing attention. However, current DCT-domain methods often exhibit limited performance. To address this challenge, we identify two critical types of correlations within the DCT coefficients of JPEG images. Building on this insight, we propose an Advanced DCT-domain JPEG Quality Enhancement (AJQE) method that fully exploits these correlations. The AJQE method enables the adaptation of numerous well-established pixel-domain models to the DCT domain, achieving superior performance with reduced computational complexity. Compared to the pixel-domain counterparts, the DCT-domain models derived by our method demonstrate a 0.35 dB improvement in PSNR and a 60.5% increase in enhancement throughput on average.
null
https://arxiv.org/abs/2506.21171v1
https://arxiv.org/pdf/2506.21171v1.pdf
null
[ "Jing Yang", "Qunliang Xing", "Mai Xu", "Minglang Qiao" ]
[ "Data Compression" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "**Discrete Cosine Transform (DCT)** is an orthogonal transformation method that decomposes an\r\nimage to its spatial frequency spectrum. It expresses a finite sequence of data points in terms of a sum of cosine functions oscillating at different frequencies. It is used a lot in compression tasks, e..g image compression where for example high-frequency components can be discarded. It is a type of Fourier-related Transform, similar to discrete fourier transforms (DFTs), but only using real numbers.\r\n\r\nImage Credit: [Wikipedia](https://en.wikipedia.org/wiki/Discrete_cosine_transform#/media/File:Example_dft_dct.svg)", "full_name": "Discrete Cosine Transform", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Fourier-related Transforms** are transforms related to Fourier Analysis. Below you can find a continuously updating list of transforms.", "name": "Fourier-related Transforms", "parent": null }, "name": "Discrete Cosine Transform", "source_title": null, "source_url": null } ]
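As a reminder of the representation the AJQE record above operates on, this sketch computes the 8x8 block DCT coefficients that JPEG quantizes; the correlations the method exploits and the enhancement network itself are beyond this illustration, and the image here is random data.

```python
# Compute JPEG-style 8x8 block DCT coefficients of a grayscale image (illustrative only).
import numpy as np
from scipy.fft import dctn, idctn

def block_dct(img: np.ndarray, block: int = 8) -> np.ndarray:
    """Return an array of DCT coefficients with the same shape as `img`.
    Image height/width are assumed to be multiples of `block`."""
    h, w = img.shape
    out = np.empty_like(img, dtype=np.float64)
    for i in range(0, h, block):
        for j in range(0, w, block):
            # Type-II DCT with orthonormal scaling, as used by JPEG
            out[i:i + block, j:j + block] = dctn(img[i:i + block, j:j + block], norm="ortho")
    return out

img = np.random.default_rng(2).uniform(0, 255, size=(16, 16))
coeffs = block_dct(img)
# Round trip: the inverse block DCT recovers the image up to numerical error
recon = np.block([[idctn(coeffs[i:i + 8, j:j + 8], norm="ortho") for j in range(0, 16, 8)]
                  for i in range(0, 16, 8)])
print(np.allclose(recon, img))
```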
https://paperswithcode.com/paper/a-novel-framework-for-integrating-3d
2506.21162
null
null
A Novel Framework for Integrating 3D Ultrasound into Percutaneous Liver Tumour Ablation
3D ultrasound (US) imaging has shown significant benefits in enhancing the outcomes of percutaneous liver tumour ablation. Its clinical integration is crucial for transitioning 3D US into the therapeutic domain. However, challenges of tumour identification in US images continue to hinder its broader adoption. In this work, we propose a novel framework for integrating 3D US into the standard ablation workflow. We present a key component, a clinically viable 2D US-CT/MRI registration approach, leveraging 3D US as an intermediary to reduce registration complexity. To facilitate efficient verification of the registration workflow, we also propose an intuitive multimodal image visualization technique. In our study, 2D US-CT/MRI registration achieved a landmark distance error of approximately 2-4 mm with a runtime of 0.22s per image pair. Additionally, non-rigid registration reduced the mean alignment error by approximately 40% compared to rigid registration. Results demonstrated the efficacy of the proposed 2D US-CT/MRI registration workflow. Our integration framework advanced the capabilities of 3D US imaging in improving percutaneous tumour ablation, demonstrating the potential to expand the therapeutic role of 3D US in clinical interventions.
null
https://arxiv.org/abs/2506.21162v1
https://arxiv.org/pdf/2506.21162v1.pdf
null
[ "Shuwei Xing", "Derek W. Cool", "David Tessier", "Elvis C. S. Chen", "Terry M. Peters", "Aaron Fenster" ]
[]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/development-of-mr-spectral-analysis-method
2506.20897
null
null
Development of MR spectral analysis method robust against static magnetic field inhomogeneity
Purpose: To develop a method that enhances the accuracy of spectral analysis in the presence of static magnetic field B0 inhomogeneity. Methods: The authors proposed a new spectral analysis method utilizing a deep learning model trained on modeled spectra that consistently represent the spectral variations induced by B0 inhomogeneity. These modeled spectra were generated from the B0 map and metabolite ratios of the healthy human brain. The B0 map was divided into a patch size of subregions, and the separately estimated metabolites and baseline components were averaged and then integrated. The quality of the modeled spectra was visually and quantitatively evaluated against the measured spectra. The analysis models were trained using measured, simulated, and modeled spectra. The performance of the proposed method was assessed using mean squared errors (MSEs) of metabolite ratios. The mean absolute percentage errors (MAPEs) of the metabolite ratios were also compared to LCModel when analyzing the phantom spectra acquired under two types of B0 inhomogeneity. Results: The modeled spectra exhibited broadened and narrowed spectral peaks depending on the B0 inhomogeneity and were quantitatively close to the measured spectra. The analysis model trained using measured spectra with modeled spectra improved MSEs by 49.89% compared to that trained using measured spectra alone, and by 26.66% compared to that trained using measured spectra with simulated spectra. The performance improved as the number of modeled spectra increased from 0 to 1,000. This model showed significantly lower MAPEs than LCModel under both types of B0 inhomogeneity. Conclusion: A new spectral analysis-trained deep learning model using the modeled spectra was developed. The results suggest that the proposed method has the potential to improve the accuracy of spectral analysis by increasing the training samples of spectra.
null
https://arxiv.org/abs/2506.20897v1
https://arxiv.org/pdf/2506.20897v1.pdf
null
[ "Shuki Maruyama", "Hidenori Takeshima" ]
[]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/dsa-nrp-no-reflow-prediction-from
2506.17501
null
null
DSA-NRP: No-Reflow Prediction from Angiographic Perfusion Dynamics in Stroke EVT
Following successful large-vessel recanalization via endovascular thrombectomy (EVT) for acute ischemic stroke (AIS), some patients experience a complication known as no-reflow, defined by persistent microvascular hypoperfusion that undermines tissue recovery and worsens clinical outcomes. Although prompt identification is crucial, standard clinical practice relies on perfusion magnetic resonance imaging (MRI) within 24 hours post-procedure, delaying intervention. In this work, we introduce the first-ever machine learning (ML) framework to predict no-reflow immediately after EVT by leveraging previously unexplored intra-procedural digital subtraction angiography (DSA) sequences and clinical variables. Our retrospective analysis included AIS patients treated at UCLA Medical Center (2011-2024) who achieved favorable mTICI scores (2b-3) and underwent pre- and post-procedure MRI. No-reflow was defined as persistent hypoperfusion (Tmax > 6 s) on post-procedural imaging. From DSA sequences (AP and lateral views), we extracted statistical and temporal perfusion features from the target downstream territory to train ML classifiers for predicting no-reflow. Our novel method significantly outperformed a clinical-features baseline (AUC: 0.7703 $\pm$ 0.12 vs. 0.5728 $\pm$ 0.12; accuracy: 0.8125 $\pm$ 0.10 vs. 0.6331 $\pm$ 0.09), demonstrating that real-time DSA perfusion dynamics encode critical insights into microvascular integrity. This approach establishes a foundation for immediate, accurate no-reflow prediction, enabling clinicians to proactively manage high-risk patients without reliance on delayed imaging.
null
https://arxiv.org/abs/2506.17501v2
https://arxiv.org/pdf/2506.17501v2.pdf
null
[ "Shreeram Athreya", "Carlos Olivares", "Ameera Ismail", "Kambiz Nael", "William Speier", "Corey Arnold" ]
[]
2025-06-20T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/papanicolaou-stain-unmixing-for-rgb-image
2506.20450
null
null
Papanicolaou Stain Unmixing for RGB Image Using Weighted Nucleus Sparsity and Total Variation Regularization
The Papanicolaou stain, consisting of eosin Y, hematoxylin, light Green SF yellowish, orange G, and Bismarck brown Y, provides extensive color information essential for cervical cancer screening in cytopathology. However, the visual observation of these colors is subjective and difficult to characterize. In digital image analysis, the RGB intensities are affected by staining and imaging variations, hindering direct quantification of color in Papanicolaou-stained samples. Stain unmixing is a promising alternative that quantifies the amounts of dyes. In previous work, multispectral imaging was utilized to estimate the dye amounts of Papanicolaou stain for quantitative diagnosis. Still, its application to RGB images presents a challenge since the number of dyes exceeds the three RGB channels. This paper proposes a novel Papanicolaou stain unmixing method for RGB images that incorporates three key assumptions: nonnegative stain abundances; a sparse spatial distribution of hematoxylin, which binds to nuclei; and piecewise smoothness of stain abundances. By formulating this as an optimization problem with nonnegativity, weighted nucleus sparsity, and total variation regularizations, our method achieved excellent performance in stain quantification when validated against the results of multispectral imaging. We also adopted the proposed method for discriminating lobular endocervical glandular hyperplasia (LEGH), a precancerous lesion of gastric-type adenocarcinoma of the cervix. The resulting quantification distinctly characterized differences between LEGH and normal endocervical cells with stain abundance, and a classifier based on the quantification results achieved 98.0% accuracy. This demonstrates the significant potential of RGB-based stain unmixing for quantitative diagnosis.
This paper proposes a novel Papanicolaou stain unmixing method for RGB images that incorporates three key assumptions: nonnegative stain abundances; a sparse spatial distribution of hematoxylin, which binds to nuclei; and piecewise smoothness of stain abundances.
https://arxiv.org/abs/2506.20450v1
https://arxiv.org/pdf/2506.20450v1.pdf
null
[ "Nanxin Gong", "Saori Takeyama", "Masahiro Yamaguchi", "Takumi Urata", "Fumikazu Kimura", "Keiko Ishii" ]
[]
2025-06-25T00:00:00
null
null
null
null
[]
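The nonnegativity constraint at the core of the stain-unmixing formulation above can be illustrated with per-pixel nonnegative least squares in optical density space; the weighted nucleus sparsity and total-variation terms of the actual method are omitted, and the two-dye stain matrix below is made up rather than the real five-dye Papanicolaou spectra.

```python
# Per-pixel nonnegative stain unmixing in optical density space (Beer-Lambert model).
# Only the nonnegativity assumption is shown; sparsity/TV regularization is omitted.
import numpy as np
from scipy.optimize import nnls

# Hypothetical 3x2 stain matrix: RGB absorbance of two made-up dyes (columns).
M = np.array([[0.65, 0.07],
              [0.70, 0.99],
              [0.29, 0.11]])

def unmix(rgb: np.ndarray, stain_matrix: np.ndarray) -> np.ndarray:
    """rgb: (H, W, 3) image in [0, 1]; returns (H, W, n_dyes) abundances >= 0."""
    od = -np.log(np.clip(rgb, 1e-6, 1.0))               # convert to optical density
    h, w, _ = od.shape
    out = np.zeros((h, w, stain_matrix.shape[1]))
    for y in range(h):
        for x in range(w):
            out[y, x], _ = nnls(stain_matrix, od[y, x])  # nonnegative least squares
    return out

rgb = np.random.default_rng(3).uniform(0.2, 1.0, size=(4, 4, 3))
abund = unmix(rgb, M)
print(abund.shape, abund.min() >= 0)
```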
https://paperswithcode.com/paper/transformer-based-multi-target-bernoulli
2506.20319
null
null
Transformer Based Multi-Target Bernoulli Tracking for Maritime Radar
Multi-target tracking in the maritime domain is a challenging problem due to the non-Gaussian and fluctuating characteristics of sea clutter. This article investigates the use of machine learning (ML) for the detection and tracking of low SIR targets in the maritime domain. The proposed method uses a transformer to extract point measurements from range-azimuth maps, before clustering and tracking using the Labelled Multi-Bernoulli (LMB) filter. A measurement driven birth density design based on the transformer attention maps is also developed. The error performance of the transformer based approach is presented and compared with a constant false alarm rate (CFAR) detection technique. The LMB filter is run in two scenarios, an ideal birth approach, and the measurement driven birth approach. Experiments indicate that the transformer based method has superior performance to the CFAR approach for all target scenarios discussed.
null
https://arxiv.org/abs/2506.20319v1
https://arxiv.org/pdf/2506.20319v1.pdf
null
[ "Caden Sweeney", "Du Yong Kim", "Branko Ristic", "Brian Cheung" ]
[]
2025-06-25T00:00:00
null
null
null
null
[]
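Since the record above benchmarks against a CFAR detector, here is a one-dimensional cell-averaging CFAR sketch; the transformer point-measurement extractor and the LMB tracker themselves are not shown, and the guard/training cell counts and false-alarm rate below are arbitrary choices.

```python
# One-dimensional cell-averaging CFAR (CA-CFAR) detector, for illustration only.
import numpy as np

def ca_cfar(power: np.ndarray, n_train: int = 8, n_guard: int = 2, pfa: float = 1e-3):
    """Return boolean detections for each cell of a power profile."""
    n = len(power)
    det = np.zeros(n, dtype=bool)
    # Threshold factor for exponentially distributed clutter with 2*n_train training cells
    alpha = 2 * n_train * (pfa ** (-1.0 / (2 * n_train)) - 1.0)
    for i in range(n_train + n_guard, n - n_train - n_guard):
        lead = power[i - n_guard - n_train:i - n_guard]
        lag = power[i + n_guard + 1:i + n_guard + 1 + n_train]
        noise = (lead.sum() + lag.sum()) / (2 * n_train)   # local clutter estimate
        det[i] = power[i] > alpha * noise
    return det

rng = np.random.default_rng(4)
profile = rng.exponential(scale=1.0, size=200)   # exponential clutter power
profile[[60, 120]] += 30.0                       # two injected targets
print(np.where(ca_cfar(profile))[0])
```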
https://paperswithcode.com/paper/volumetric-segmentation-of-muscle
2506.20206
null
null
Volumetric segmentation of muscle compartments using in vivo imaging and architectural validation in human finger flexors
Segmenting muscle compartments and measuring their architecture can facilitate movement function assessment, accurate musculoskeletal modeling, and synergy-based electromyogram simulation. Here, we presented a novel method for volumetric segmentation of muscle compartments using in vivo imaging, focusing on the independent compartments for finger control of flexor digitorum superficialis (FDS). Besides, we measured the architectural properties of FDS compartments and validated the segmentation. Specifically, ultrasound and magnetic resonance imaging (MRI) from 10 healthy subjects were used for segmentation and measurement, while electromyography was utilized for validation. A two-step piecewise segmentation was proposed, first annotating compartment regions in the cross-sectional ultrasound image based on compartment movement, and then performing minimum energy matching to register the ultrasound data to the three-dimensional MRI coordinate system. Additionally, the architectural properties were measured in the compartment masks from the segmentation using MRI tractography. Anatomical correctness was verified by comparing known anatomy with reconstructed fiber tracts and measured properties, while segmentation accuracy was quantified as the percentage of finger electromyogram centers falling within their corresponding compartments. Results demonstrated agreement for the fiber orientation between the tractography and cadaveric photographs. Significant differences in architectural properties (P < 0.001) were observed between compartments. The properties of FDS and its compartments were within the physiological ranges (P < 0.01). 95% (38/40) of the electromyogram centers were located within respective compartments, with 2 errors occurring in the index and little fingers. The validated segmentation method and derived architectural properties may advance biomedical applications.
null
https://arxiv.org/abs/2506.20206v1
https://arxiv.org/pdf/2506.20206v1.pdf
null
[ "Yang Li" ]
[ "Anatomy", "Segmentation" ]
2025-06-25T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/u-r-veda-integrating-unet-residual-links-edge
2506.20689
null
null
U-R-VEDA: Integrating UNET, Residual Links, Edge and Dual Attention, and Vision Transformer for Accurate Semantic Segmentation of CMRs
Artificial intelligence, including deep learning models, will play a transformative role in automated medical image analysis for the diagnosis of cardiac disorders and their management. Automated accurate delineation of cardiac images is the first necessary initial step for the quantification and automated diagnosis of cardiac disorders. In this paper, we propose a deep learning based enhanced UNet model, U-R-Veda, which integrates convolution transformations, vision transformer, residual links, channel-attention, and spatial attention, together with edge-detection based skip-connections for an accurate fully-automated semantic segmentation of cardiac magnetic resonance (CMR) images. The model extracts local-features and their interrelationships using a stack of combination convolution blocks, with embedded channel and spatial attention in the convolution block, and vision transformers. Deep embedding of channel and spatial attention in the convolution block identifies important features and their spatial localization. The combined edge information with channel and spatial attention as skip connection reduces information-loss during convolution transformations. The overall model significantly improves the semantic segmentation of CMR images necessary for improved medical image analysis. An algorithm for the dual attention module (channel and spatial attention) has been presented. Performance results show that U-R-Veda achieves an average accuracy of 95.2%, based on DSC metrics. The model outperforms the accuracy attained by other models, based on DSC and HD metrics, especially for the delineation of right-ventricle and left-ventricle-myocardium.
null
https://arxiv.org/abs/2506.20689v1
https://arxiv.org/pdf/2506.20689v1.pdf
null
[ "Racheal Mukisa", "Arvind K. Bansal" ]
[ "Edge Detection", "Medical Image Analysis", "Semantic Segmentation" ]
2025-06-25T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)", "full_name": "Convolution", "introduced_year": 1980, "main_collection": { "area": "Computer Vision", "description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.", "name": "Convolutions", "parent": "Image Feature Extractors" }, "name": "Convolution", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/building-lightweight-semantic-segmentation
2506.20688
null
null
Building Lightweight Semantic Segmentation Models for Aerial Images Using Dual Relation Distillation
Recently, there have been significant improvements in the accuracy of CNN models for semantic segmentation. However, these models are often heavy and suffer from low inference speed, which limits their practical application. To address this issue, knowledge distillation has emerged as a promising approach to achieve a good trade-off between segmentation accuracy and efficiency. In this paper, we propose a novel dual relation distillation (DRD) technique that transfers both spatial and channel relations in feature maps from a cumbersome model (teacher) to a compact model (student). Specifically, we compute spatial and channel relation maps separately for the teacher and student models, and then align corresponding relation maps by minimizing their distance. Since the teacher model usually learns more information and collects richer spatial and channel correlations than the student model, transferring these correlations from the teacher to the student can help the student mimic the teacher better in terms of feature distribution, thus improving the segmentation accuracy of the student model. We conduct comprehensive experiments on three segmentation datasets, including two widely adopted benchmarks in the remote sensing field (Vaihingen and Potsdam datasets) and one popular benchmark in general scene (Cityscapes dataset). The experimental results demonstrate that our novel distillation framework can significantly boost the performance of the student network without incurring extra computational overhead.
null
https://arxiv.org/abs/2506.20688v1
https://arxiv.org/pdf/2506.20688v1.pdf
null
[ "Minglong Li", "Lianlei Shan", "Weiqiang Wang", "Ke Lv", "Bin Luo", "Si-Bao Chen" ]
[ "Knowledge Distillation", "Relation", "Segmentation", "Semantic Segmentation" ]
2025-06-25T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://research.google/blog/auto-generated-summaries-in-google-docs/", "description": "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.\r\nSource: [Distilling the Knowledge in a Neural Network](https://arxiv.org/abs/1503.02531)", "full_name": "Knowledge Distillation", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Knowledge Distillation", "parent": null }, "name": "Knowledge Distillation", "source_title": "Distilling the Knowledge in a Neural Network", "source_url": "http://arxiv.org/abs/1503.02531v1" }, { "code_snippet_url": "", "description": "In the ALIGN method, visual and language representations are jointly trained from noisy image alt-text data. The image and text encoders are learned via contrastive loss (formulated as normalized softmax) that pushes the embeddings of the matched image-text pair together and pushing those of non-matched image-text pair apart. The model learns to align visual and language representations of the image and text pairs using the contrastive loss. The representations can be used for vision-only or vision-language task transfer. Without any fine-tuning, ALIGN powers zero-shot visual classification and cross-modal search including image-to-text search, text-to image search and even search with joint image+text queries.", "full_name": "ALIGN", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "Involves models that adapt pre-training to the field of Vision-and-Language (V-L) learning and improve the performance on downstream tasks like visual question answering and visual captioning.\r\n\r\nAccording to [Du et al. (2022)](https://arxiv.org/pdf/2202.10936.pdf), information coming from the different modalities can be encoded in three ways: fusion encoder, dual encoder, and a combination of both. \r\n\r\nReferences:\r\n\r\n- [A Survey of Vision-Language Pre-Trained Models](https://arxiv.org/pdf/2202.10936.pdf)\r\n- [Vision Language models: towards multi-modal deep learning](https://theaisummer.com/vision-language-models/)", "name": "Vision and Language Pre-Trained Models", "parent": null }, "name": "ALIGN", "source_title": "Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision", "source_url": "https://arxiv.org/abs/2102.05918v2" } ]
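A rough sketch of the channel-relation half of the distillation idea described in the DRD record above: channel-wise affinity (Gram-style) matrices are computed for teacher and student feature maps and aligned with an MSE loss. The spatial-relation branch, the exact normalization used by DRD, and the 1x1 projection to match channel counts are assumptions, not the paper's implementation.

```python
# Channel-relation distillation sketch: align channel affinity matrices of teacher/student.
import torch
import torch.nn.functional as F

def channel_relation(feat: torch.Tensor) -> torch.Tensor:
    """feat: (B, C, H, W) -> (B, C, C) cosine-similarity matrix between channels."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    f = F.normalize(f, dim=2)                # unit-norm per channel
    return torch.bmm(f, f.transpose(1, 2))

def relation_distillation_loss(student_feat, teacher_feat):
    # Teacher relations are treated as fixed targets (no gradient through the teacher)
    return F.mse_loss(channel_relation(student_feat),
                      channel_relation(teacher_feat).detach())

# Toy usage: the student has fewer channels, so a 1x1 projection matches the teacher's.
student = torch.randn(2, 64, 32, 32, requires_grad=True)
proj = torch.nn.Conv2d(64, 256, kernel_size=1)
teacher = torch.randn(2, 256, 32, 32)
loss = relation_distillation_loss(proj(student), teacher)
loss.backward()
print(float(loss))
```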
https://paperswithcode.com/paper/global-and-local-contrastive-learning-for
2506.20683
null
null
Global and Local Contrastive Learning for Joint Representations from Cardiac MRI and ECG
An electrocardiogram (ECG) is a widely used, cost-effective tool for detecting electrical abnormalities in the heart. However, it cannot directly measure functional parameters, such as ventricular volumes and ejection fraction, which are crucial for assessing cardiac function. Cardiac magnetic resonance (CMR) is the gold standard for these measurements, providing detailed structural and functional insights, but is expensive and less accessible. To bridge this gap, we propose PTACL (Patient and Temporal Alignment Contrastive Learning), a multimodal contrastive learning framework that enhances ECG representations by integrating spatio-temporal information from CMR. PTACL uses global patient-level contrastive loss and local temporal-level contrastive loss. The global loss aligns patient-level representations by pulling ECG and CMR embeddings from the same patient closer together, while pushing apart embeddings from different patients. Local loss enforces fine-grained temporal alignment within each patient by contrasting encoded ECG segments with corresponding encoded CMR frames. This approach enriches ECG representations with diagnostic information beyond electrical activity and transfers more insights between modalities than global alignment alone, all without introducing new learnable weights. We evaluate PTACL on paired ECG-CMR data from 27,951 subjects in the UK Biobank. Compared to baseline approaches, PTACL achieves better performance in two clinically relevant tasks: (1) retrieving patients with similar cardiac phenotypes and (2) predicting CMR-derived cardiac function parameters, such as ventricular volumes and ejection fraction. Our results highlight the potential of PTACL to enhance non-invasive cardiac diagnostics using ECG. The code is available at: https://github.com/alsalivan/ecgcmr
To bridge this gap, we propose PTACL (Patient and Temporal Alignment Contrastive Learning), a multimodal contrastive learning framework that enhances ECG representations by integrating spatio-temporal information from CMR.
https://arxiv.org/abs/2506.20683v1
https://arxiv.org/pdf/2506.20683v1.pdf
null
[ "Alexander Selivanov", "Philip Müller", "Özgün Turgut", "Nil Stolt-Ansó", "Daniel Rückert" ]
[ "Contrastive Learning", "Diagnostic" ]
2025-06-24T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": null, "introduced_year": 2000, "main_collection": { "area": "Graphs", "description": "", "name": "Graph Representation Learning", "parent": null }, "name": "Contrastive Learning", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/systematic-review-of-pituitary-gland-and
2506.19797
null
null
Systematic Review of Pituitary Gland and Pituitary Adenoma Automatic Segmentation Techniques in Magnetic Resonance Imaging
Purpose: Accurate segmentation of both the pituitary gland and adenomas from magnetic resonance imaging (MRI) is essential for diagnosis and treatment of pituitary adenomas. This systematic review evaluates automatic segmentation methods for improving the accuracy and efficiency of MRI-based segmentation of pituitary adenomas and the gland itself. Methods: We reviewed 34 studies that employed automatic and semi-automatic segmentation methods. We extracted and synthesized data on segmentation techniques and performance metrics (such as Dice overlap scores). Results: The majority of reviewed studies utilized deep learning approaches, with U-Net-based models being the most prevalent. Automatic methods yielded Dice scores of 0.19--89.00\% for pituitary gland and 4.60--96.41\% for adenoma segmentation. Semi-automatic methods reported 80.00--92.10\% for pituitary gland and 75.90--88.36\% for adenoma segmentation. Conclusion: Most studies did not report important metrics such as MR field strength, age and adenoma size. Automated segmentation techniques such as U-Net-based models show promise, especially for adenoma segmentation, but further improvements are needed to achieve consistently good performance in small structures like the normal pituitary gland. Continued innovation and larger, diverse datasets are likely critical to enhancing clinical applicability.
null
https://arxiv.org/abs/2506.19797v1
https://arxiv.org/pdf/2506.19797v1.pdf
null
[ "Mubaraq Yakubu", "Navodini Wijethilake", "Jonathan Shapey", "Andrew King", "Alexander Hammers" ]
[ "Segmentation" ]
2025-06-24T00:00:00
null
null
null
null
[]
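This review, like several segmentation papers below, reports Dice overlap scores. For reference, the Dice score between two binary masks follows the standard definition sketched here on toy NumPy arrays.

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice overlap between two binary masks of the same shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# toy 2D example with partially overlapping squares
pred = np.zeros((64, 64), dtype=np.uint8); pred[20:40, 20:40] = 1
gt   = np.zeros((64, 64), dtype=np.uint8); gt[25:45, 25:45] = 1
print(f"Dice = {dice_score(pred, gt):.3f}")
```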
https://paperswithcode.com/paper/nerf-based-cbct-reconstruction-needs
2506.19742
null
null
NeRF-based CBCT Reconstruction needs Normalization and Initialization
Cone Beam Computed Tomography (CBCT) is widely used in medical imaging. However, the limited number and intensity of X-ray projections make reconstruction an ill-posed problem with severe artifacts. NeRF-based methods have achieved great success in this task. However, they suffer from a local-global training mismatch between their two key components: the hash encoder and the neural network. Specifically, in each training step, only a subset of the hash encoder's parameters is used (local sparse), whereas all parameters in the neural network participate (global dense). Consequently, hash features generated in each step are highly misaligned, as they come from different subsets of the hash encoder. These misalignments from different training steps are then fed into the neural network, causing repeated inconsistent global updates in training, which leads to unstable training, slower convergence, and degraded reconstruction quality. Aiming to alleviate the impact of this local-global optimization mismatch, we introduce a Normalized Hash Encoder, which enhances feature consistency and mitigates the mismatch. Additionally, we propose a Mapping Consistency Initialization(MCI) strategy that initializes the neural network before training by leveraging the global mapping property from a well-trained model. The initialized neural network exhibits improved stability during early training, enabling faster convergence and enhanced reconstruction performance. Our method is simple yet effective, requiring only a few lines of code while substantially improving training efficiency on 128 CT cases collected from 4 different datasets, covering 7 distinct anatomical regions.
However, they suffer from a local-global training mismatch between their two key components: the hash encoder and the neural network.
https://arxiv.org/abs/2506.19742v1
https://arxiv.org/pdf/2506.19742v1.pdf
null
[ "Zhuowei Xu", "Han Li", "Dai Sun", "Zhicheng Li", "Yujia Li", "Qingpeng Kong", "Zhiwei Cheng", "Nassir Navab", "S. Kevin Zhou" ]
[ "global-optimization", "NeRF" ]
2025-06-24T00:00:00
null
null
null
null
[]
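The CBCT paper above attributes unstable training to sparsely updated, inconsistently scaled hash-encoder features. As a rough illustration of the general idea only (not the paper's actual Normalized Hash Encoder or its MCI initialization), one can place a normalization layer between a lookup-table encoder and the downstream MLP; the embedding table standing in for a multiresolution hash grid is a simplifying assumption.

```python
import torch
import torch.nn as nn

class NormalizedHashFeatures(nn.Module):
    """Toy stand-in for a hash encoder followed by feature normalisation."""
    def __init__(self, table_size=2**14, feat_dim=32):
        super().__init__()
        self.table = nn.Embedding(table_size, feat_dim)   # plays the role of the hash table
        self.norm = nn.LayerNorm(feat_dim)                # keeps per-step features on a common scale

    def forward(self, idx):
        return self.norm(self.table(idx))                 # normalised features fed to the MLP

encoder = NormalizedHashFeatures()
mlp = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
idx = torch.randint(0, 2**14, (4096,))                    # table rows hit by one training batch
density = mlp(encoder(idx))                               # only the looked-up rows receive gradients
```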
https://paperswithcode.com/paper/recognet-recurrent-context-guided-network-for
2506.19687
null
null
ReCoGNet: Recurrent Context-Guided Network for 3D MRI Prostate Segmentation
Prostate gland segmentation from T2-weighted MRI is a critical yet challenging task in clinical prostate cancer assessment. While deep learning-based methods have significantly advanced automated segmentation, most conventional approaches-particularly 2D convolutional neural networks (CNNs)-fail to leverage inter-slice anatomical continuity, limiting their accuracy and robustness. Fully 3D models offer improved spatial coherence but require large amounts of annotated data, which is often impractical in clinical settings. To address these limitations, we propose a hybrid architecture that models MRI sequences as spatiotemporal data. Our method uses a deep, pretrained DeepLabV3 backbone to extract high-level semantic features from each MRI slice and a recurrent convolutional head, built with ConvLSTM layers, to integrate information across slices while preserving spatial structure. This combination enables context-aware segmentation with improved consistency, particularly in data-limited and noisy imaging conditions. We evaluate our method on the PROMISE12 benchmark under both clean and contrast-degraded test settings. Compared to state-of-the-art 2D and 3D segmentation models, our approach demonstrates superior performance in terms of precision, recall, Intersection over Union (IoU), and Dice Similarity Coefficient (DSC), highlighting its potential for robust clinical deployment.
null
https://arxiv.org/abs/2506.19687v1
https://arxiv.org/pdf/2506.19687v1.pdf
null
[ "Ahmad Mustafa", "Reza Rastegar", "Ghassan AlRegib" ]
[ "Segmentation" ]
2025-06-24T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)", "full_name": "Convolution", "introduced_year": 1980, "main_collection": { "area": "Computer Vision", "description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.", "name": "Convolutions", "parent": "Image Feature Extractors" }, "name": "Convolution", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "**ConvLSTM** is a type of recurrent neural network for spatio-temporal prediction that has convolutional structures in both the input-to-state and state-to-state transitions. The ConvLSTM determines the future state of a certain cell in the grid by the inputs and past states of its local neighbors. This can easily be achieved by using a [convolution](https://paperswithcode.com/method/convolution) operator in the state-to-state and input-to-state transitions (see Figure). The key equations of ConvLSTM are shown below, where $∗$ denotes the convolution operator and $\\odot$ the Hadamard product:\r\n\r\n$$ i\\_{t} = \\sigma\\left(W\\_{xi} ∗ X\\_{t} + W\\_{hi} ∗ H\\_{t−1} + W\\_{ci} \\odot \\mathcal{C}\\_{t−1} + b\\_{i}\\right) $$\r\n\r\n$$ f\\_{t} = \\sigma\\left(W\\_{xf} ∗ X\\_{t} + W\\_{hf} ∗ H\\_{t−1} + W\\_{cf} \\odot \\mathcal{C}\\_{t−1} + b\\_{f}\\right) $$\r\n\r\n$$ \\mathcal{C}\\_{t} = f\\_{t} \\odot \\mathcal{C}\\_{t−1} + i\\_{t} \\odot \\text{tanh}\\left(W\\_{xc} ∗ X\\_{t} + W\\_{hc} ∗ \\mathcal{H}\\_{t−1} + b\\_{c}\\right) $$\r\n\r\n$$ o\\_{t} = \\sigma\\left(W\\_{xo} ∗ X\\_{t} + W\\_{ho} ∗ \\mathcal{H}\\_{t−1} + W\\_{co} \\odot \\mathcal{C}\\_{t} + b\\_{o}\\right) $$\r\n\r\n$$ \\mathcal{H}\\_{t} = o\\_{t} \\odot \\text{tanh}\\left(C\\_{t}\\right) $$\r\n\r\nIf we view the states as the hidden representations of moving objects, a ConvLSTM with a larger transitional kernel should be able to capture faster motions while one with a smaller kernel can capture slower motions. \r\n\r\nTo ensure that the states have the same number of rows and same number of columns as the inputs, padding is needed before applying the convolution operation. Here, padding of the hidden states on the boundary points can be viewed as using the state of the outside world for calculation. 
Usually, before the first input comes, we initialize all the states of the [LSTM](https://paperswithcode.com/method/lstm) to zero which corresponds to \"total ignorance\" of the future.", "full_name": "ConvLSTM", "introduced_year": 2000, "main_collection": { "area": "Sequential", "description": "", "name": "Recurrent Neural Networks", "parent": null }, "name": "ConvLSTM", "source_title": "Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting", "source_url": "http://arxiv.org/abs/1506.04214v2" } ]
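The ConvLSTM gate equations quoted in the method notes above can be implemented compactly with a single convolution producing all four gates. Below is a minimal cell in that spirit; the peephole terms (the Hadamard products with the cell state) are dropped for brevity, and the loop over MRI slices is only a usage illustration, not ReCoGNet's actual recurrent head.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell following the gate equations quoted above (no peepholes)."""
    def __init__(self, in_ch, hid_ch, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2   # keep the spatial size of the states equal to the input
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel_size, padding=pad)

    def forward(self, x, state):
        h, c = state
        gates = self.conv(torch.cat([x, h], dim=1))
        i, f, g, o = torch.chunk(gates, 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)        # new cell state
        h = o * torch.tanh(c)                # new hidden state
        return h, c

# e.g. integrating per-slice backbone features across a volume, slice by slice
cell = ConvLSTMCell(in_ch=64, hid_ch=32)
h = torch.zeros(1, 32, 48, 48)
c = torch.zeros(1, 32, 48, 48)
for slice_feat in torch.randn(20, 1, 64, 48, 48):   # 20 slices of hypothetical features
    h, c = cell(slice_feat, (h, c))
```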
https://paperswithcode.com/paper/crossmoda-challenge-evolution-of-cross
2506.12006
null
null
crossMoDA Challenge: Evolution of Cross-Modality Domain Adaptation Techniques for Vestibular Schwannoma and Cochlea Segmentation from 2021 to 2023
The cross-Modality Domain Adaptation (crossMoDA) challenge series, initiated in 2021 in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), focuses on unsupervised cross-modality segmentation, learning from contrast-enhanced T1 (ceT1) and transferring to T2 MRI. The task is an extreme example of domain shift chosen to serve as a meaningful and illustrative benchmark. From a clinical application perspective, it aims to automate Vestibular Schwannoma (VS) and cochlea segmentation on T2 scans for more cost-effective VS management. Over time, the challenge objectives have evolved to enhance its clinical relevance. The challenge evolved from using single-institutional data and basic segmentation in 2021 to incorporating multi-institutional data and Koos grading in 2022, and by 2023, it included heterogeneous routine data and sub-segmentation of intra- and extra-meatal tumour components. In this work, we report the findings of the 2022 and 2023 editions and perform a retrospective analysis of the challenge progression over the years. The observations from the successive challenge contributions indicate that the number of outliers decreases with an expanding dataset. This is notable since the diversity of scanning protocols of the datasets concurrently increased. The winning approach of the 2023 edition reduced the number of outliers on the 2021 and 2022 testing data, demonstrating how increased data heterogeneity can enhance segmentation performance even on homogeneous data. However, the cochlea Dice score declined in 2023, likely due to the added complexity from tumour sub-annotations affecting overall segmentation performance. While progress is still needed for clinically acceptable VS segmentation, the plateauing performance suggests that a more challenging cross-modal task may better serve future benchmarking.
null
https://arxiv.org/abs/2506.12006v2
https://arxiv.org/pdf/2506.12006v2.pdf
null
[ "Navodini Wijethilake", "Reuben Dorent", "Marina Ivory", "Aaron Kujawa", "Stefan Cornelissen", "Patrick Langenhuizen", "Mohamed Okasha", "Anna Oviedova", "Hexin Dong", "Bogyeong Kang", "Guillaume Sallé", "Luyi Han", "Ziyuan Zhao", "Han Liu", "Tao Yang", "Shahad Hardan", "Hussain Alasmawi", "Santosh Sanjeev", "Yuzhou Zhuang", "Satoshi Kondo", "Maria Baldeon Calisto", "Shaikh Muhammad Uzair Noman", "Cancan Chen", "Ipek Oguz", "Rongguo Zhang", "Mina Rezaei", "Susana K. Lai-Yuen", "Satoshi Kasai", "Chih-Cheng Hung", "Mohammad Yaqub", "Lisheng Wang", "Benoit M. Dawant", "Cuntai Guan", "Ritse Mann", "Vincent Jaouen", "Ji-Wung Han", "Li Zhang", "Jonathan Shapey", "Tom Vercauteren" ]
[ "Benchmarking", "Domain Adaptation", "Segmentation" ]
2025-06-13T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/vision-transformer-based-time-series-image
2506.19591
null
null
Vision Transformer-Based Time-Series Image Reconstruction for Cloud-Filling Applications
Cloud cover in multispectral imagery (MSI) poses significant challenges for early season crop mapping, as it leads to missing or corrupted spectral information. Synthetic aperture radar (SAR) data, which is not affected by cloud interference, offers a complementary solution, but lacks sufficient spectral detail for precise crop mapping. To address this, we propose a novel framework, Time-series MSI Image Reconstruction using Vision Transformer (ViT), to reconstruct MSI data in cloud-covered regions by leveraging the temporal coherence of MSI and the complementary information from SAR through the attention mechanism. Comprehensive experiments, using rigorous reconstruction evaluation metrics, demonstrate that the Time-series ViT framework significantly outperforms baselines that use non-time-series MSI and SAR or time-series MSI without SAR, effectively enhancing MSI image reconstruction in cloud-covered regions.
null
https://arxiv.org/abs/2506.19591v1
https://arxiv.org/pdf/2506.19591v1.pdf
null
[ "Lujun Li", "Yiqun Wang", "Radu State" ]
[ "Image Reconstruction", "Time Series" ]
2025-06-24T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275", "description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.", "full_name": "Dropout", "introduced_year": 2000, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Dropout", "source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", "source_url": "http://jmlr.org/papers/v15/srivastava14a.html" }, { "code_snippet_url": "", "description": "**Absolute Position Encodings** are a type of position embeddings for [[Transformer](https://paperswithcode.com/method/transformer)-based models] where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d\\_{model}$ as the embeddings, so that the two can be summed. In the original implementation, sine and cosine functions of different frequencies are used:\r\n\r\n$$ \\text{PE}\\left(pos, 2i\\right) = \\sin\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\n$$ \\text{PE}\\left(pos, 2i+1\\right) = \\cos\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\nwhere $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\\pi$ to $10000 \\dot 2\\pi$. This function was chosen because the authors hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $\\text{PE}\\_{pos+k}$ can be represented as a linear function of $\\text{PE}\\_{pos}$.\r\n\r\nImage Source: [D2L.ai](https://d2l.ai/chapter_attention-mechanisms/self-attention-and-positional-encoding.html)", "full_name": "Absolute Position Encodings", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Position Embeddings", "parent": null }, "name": "Absolute Position Encodings", "source_title": "Attention Is All You Need", "source_url": "https://arxiv.org/abs/1706.03762v7" }, { "code_snippet_url": null, "description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. 
The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.", "full_name": "Byte Pair Encoding", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "", "name": "Subword Segmentation", "parent": null }, "name": "BPE", "source_title": "Neural Machine Translation of Rare Words with Subword Units", "source_url": "http://arxiv.org/abs/1508.07909v5" }, { "code_snippet_url": null, "description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$", "full_name": "Softmax", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.", "name": "Output Functions", "parent": null }, "name": "Softmax", "source_title": null, "source_url": null }, { "code_snippet_url": "", "description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k}$ and $1-\\frac{k-1}{k}\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)", "full_name": "Label Smoothing", "introduced_year": 1985, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Label Smoothing", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/tunz/transformer-pytorch/blob/e7266679f0b32fd99135ea617213f986ceede056/model/transformer.py#L201", "description": "A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on complex recurrent or convolutional neural networks that include an encoder and a decoder. 
The Transformer also employs an encoder and decoder, but removing recurrence in favor of [attention mechanisms](https://paperswithcode.com/methods/category/attention-mechanisms-1) allows for significantly more parallelization than methods like [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks).", "full_name": "Transformer", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Transformer", "source_title": "Attention Is All You Need", "source_url": "https://arxiv.org/abs/1706.03762v7" }, { "code_snippet_url": null, "description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville", "full_name": "Dense Connections", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.", "name": "Feedforward Networks", "parent": null }, "name": "Dense Connections", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8", "description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. 
Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.", "full_name": "Layer Normalization", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.", "name": "Normalization", "parent": null }, "name": "Layer Normalization", "source_title": "Layer Normalization", "source_url": "http://arxiv.org/abs/1607.06450v1" }, { "code_snippet_url": "https://github.com/google-research/vision_transformer", "description": "The **Vision Transformer**, or **ViT**, is a model for image classification that employs a [Transformer](https://paperswithcode.com/method/transformer)-like architecture over patches of the image. An image is split into fixed-size patches, each of them are then linearly embedded, position embeddings are added, and the resulting sequence of vectors is fed to a standard [Transformer](https://paperswithcode.com/method/transformer) encoder. In order to perform classification, the standard approach of adding an extra learnable “classification token” to the sequence is used.", "full_name": "Vision Transformer", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Image Models** are methods that build representations of images for downstream tasks such as classification and object detection. The most popular subcategory are convolutional neural networks. Below you can find a continuously updated list of image models.", "name": "Image Models", "parent": null }, "name": "Vision Transformer", "source_title": "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale", "source_url": "https://arxiv.org/abs/2010.11929v2" } ]
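The method notes above quote the sinusoidal absolute position encoding formulas used with Transformers. The snippet below implements exactly those formulas; whether the cloud-filling ViT uses fixed sinusoids or learned position embeddings is not stated here, so treat this purely as an illustration of the quoted equations.

```python
import torch

def sinusoidal_position_encoding(seq_len, d_model):
    """PE(pos, 2i) = sin(pos / 10000^(2i/d_model)), PE(pos, 2i+1) = cos(...)."""
    pos = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)     # (seq_len, 1)
    i = torch.arange(0, d_model, 2, dtype=torch.float32)              # even dimensions
    angle = pos / (10000.0 ** (i / d_model))                          # (seq_len, d_model/2)
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(angle)
    pe[:, 1::2] = torch.cos(angle)
    return pe

# added to patch / time-step embeddings before the Transformer encoder
tokens = torch.randn(1, 196, 768)                     # e.g. 14x14 image patches
tokens = tokens + sinusoidal_position_encoding(196, 768)
```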
https://paperswithcode.com/paper/learning-from-anatomy-supervised-anatomical
2506.19590
null
null
Learning from Anatomy: Supervised Anatomical Pretraining (SAP) for Improved Metastatic Bone Disease Segmentation in Whole-Body MRI
The segmentation of metastatic bone disease (MBD) in whole-body MRI (WB-MRI) is a challenging problem. Due to varying appearances and anatomical locations of lesions, ambiguous boundaries, and severe class imbalance, obtaining reliable segmentations requires large, well-annotated datasets capturing lesion variability. Generating such datasets requires substantial time and expertise, and is prone to error. While self-supervised learning (SSL) can leverage large unlabeled datasets, learned generic representations often fail to capture the nuanced features needed for accurate lesion detection. In this work, we propose a Supervised Anatomical Pretraining (SAP) method that learns from a limited dataset of anatomical labels. First, an MRI-based skeletal segmentation model is developed and trained on WB-MRI scans from healthy individuals for high-quality skeletal delineation. Then, we compare its downstream efficacy in segmenting MBD on a cohort of 44 patients with metastatic prostate cancer, against both a baseline random initialization and a state-of-the-art SSL method. SAP significantly outperforms both the baseline and SSL-pretrained models, achieving a normalized surface Dice of 0.76 and a Dice coefficient of 0.64. The method achieved a lesion detection F2 score of 0.44, improving on 0.24 (baseline) and 0.31 (SSL). When considering only clinically relevant lesions larger than 1~ml, SAP achieves a detection sensitivity of 100% in 28 out of 32 patients. Learning bone morphology from anatomy yields an effective and domain-relevant inductive bias that can be leveraged for the downstream segmentation task of bone lesions. All code and models are made publicly available.
null
https://arxiv.org/abs/2506.19590v1
https://arxiv.org/pdf/2506.19590v1.pdf
null
[ "Joris Wuts", "Jakub Ceranka", "Nicolas Michoux", "Frédéric Lecouvet", "Jef Vandemeulebroucke" ]
[ "Anatomy", "Inductive Bias", "Lesion Detection", "Self-Supervised Learning" ]
2025-06-24T00:00:00
null
null
null
null
[]
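The SAP paper reports lesion detection F2 scores, i.e. the F-beta measure with beta = 2, which weights recall more heavily than precision. A small reference implementation follows; the counts in the usage line are invented for illustration and do not reproduce the paper's results.

```python
def f_beta(tp, fp, fn, beta=2.0):
    """F-beta score from true positives, false positives, and false negatives."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# e.g. 11 lesions detected correctly, 9 false positives, 14 missed lesions
print(round(f_beta(11, 9, 14), 3))
```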
https://paperswithcode.com/paper/angio-diff-learning-a-self-supervised
2506.19455
null
null
Angio-Diff: Learning a Self-Supervised Adversarial Diffusion Model for Angiographic Geometry Generation
Vascular diseases pose a significant threat to human health, with X-ray angiography established as the gold standard for diagnosis, allowing for detailed observation of blood vessels. However, angiographic X-rays expose personnel and patients to higher radiation levels than non-angiographic X-rays, which is undesirable. Thus, modality translation from non-angiographic to angiographic X-rays is desirable. Data-driven deep approaches are hindered by the lack of paired large-scale X-ray angiography datasets. This makes high-quality vascular angiography synthesis crucial, yet it remains challenging. We find that current medical image synthesis primarily operates at the pixel level and struggles to adapt to the complex geometric structure of blood vessels, resulting in unsatisfactory quality of blood vessel image synthesis, such as disconnections or unnatural curvatures. To overcome this issue, we propose a self-supervised method via diffusion models to transform non-angiographic X-rays into angiographic X-rays, mitigating data shortages for data-driven approaches. Our model comprises a diffusion model that learns the distribution of vascular data from the diffusion latent, a generator for vessel synthesis, and a mask-based adversarial module. To enhance geometric accuracy, we propose a parametric vascular model to fit the shape and distribution of blood vessels. The proposed method contributes a pipeline and a synthetic dataset for X-ray angiography. We conducted extensive comparative and ablation experiments to evaluate Angio-Diff. The results demonstrate that our method achieves state-of-the-art performance in synthetic angiography image quality and more accurately synthesizes the geometric structure of blood vessels. The code is available at https://github.com/zfw-cv/AngioDiff.
The proposed method contributes a pipeline and a synthetic dataset for X-ray angiography.
https://arxiv.org/abs/2506.19455v1
https://arxiv.org/pdf/2506.19455v1.pdf
null
[ "Zhifeng Wang", "Renjiao Yi", "Xin Wen", "Chenyang Zhu", "Kai Xu", "Kunlun He" ]
[ "Image Generation" ]
2025-06-24T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" } ]
https://paperswithcode.com/paper/naada-a-noise-aware-attention-denoising
2506.19387
null
null
NAADA: A Noise-Aware Attention Denoising Autoencoder for Dental Panoramic Radiographs
Convolutional denoising autoencoders (DAEs) are powerful tools for image restoration. However, they inherit a key limitation of convolutional neural networks (CNNs): they tend to recover low-frequency features, such as smooth regions, more effectively than high-frequency details. This leads to the loss of fine details, which is particularly problematic in dental radiographs where preserving subtle anatomical structures is crucial. While self-attention mechanisms can help mitigate this issue by emphasizing important features, conventional attention methods often prioritize features corresponding to cleaner regions and may overlook those obscured by noise. To address this limitation, we propose a noise-aware self-attention method, which allows the model to effectively focus on and recover key features even within noisy regions. Building on this approach, we introduce the noise-aware attention-enhanced denoising autoencoder (NAADA) network for enhancing noisy panoramic dental radiographs. Compared with recent state-of-the-art (and much heavier) methods such as Uformer and MResDNN, our method improves the reconstruction of fine details, ensuring better image quality and diagnostic accuracy.
null
https://arxiv.org/abs/2506.19387v1
https://arxiv.org/pdf/2506.19387v1.pdf
null
[ "Khuram Naveed", "Bruna Neves de Freitas", "Ruben Pauwels" ]
[ "Denoising", "Diagnostic", "Image Restoration" ]
2025-06-24T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "A **Denoising Autoencoder** is a modification on the [autoencoder](https://paperswithcode.com/method/autoencoder) to prevent the network learning the identity function. Specifically, if the autoencoder is too big, then it can just learn the data, so the output equals the input, and does not perform any useful representation learning or dimensionality reduction. Denoising autoencoders solve this problem by corrupting the input data on purpose, adding noise or masking some of the input values.\r\n\r\nImage Credit: [Kumar et al](https://www.semanticscholar.org/paper/Static-hand-gesture-recognition-using-stacked-Kumar-Nandi/5191ddf3f0841c89ba9ee592a2f6c33e4a40d4bf)", "full_name": "Denoising Autoencoder", "introduced_year": 2008, "main_collection": { "area": "Computer Vision", "description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.", "name": "Generative Models", "parent": null }, "name": "Denoising Autoencoder", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "", "full_name": "Focus", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Focus", "source_title": "Focus Your Attention (with Adaptive IIR Filters)", "source_url": "https://arxiv.org/abs/2305.14952v2" } ]
https://paperswithcode.com/paper/reconsidering-explicit-longitudinal
2506.19363
null
null
Reconsidering Explicit Longitudinal Mammography Alignment for Enhanced Breast Cancer Risk Prediction
Regular mammography screening is essential for early breast cancer detection. Deep learning-based risk prediction methods have sparked interest to adjust screening intervals for high-risk groups. While early methods focused only on current mammograms, recent approaches leverage the temporal aspect of screenings to track breast tissue changes over time, requiring spatial alignment across different time points. Two main strategies for this have emerged: explicit feature alignment through deformable registration and implicit learned alignment using techniques like transformers, with the former providing more control. However, the optimal approach for explicit alignment in mammography remains underexplored. In this study, we provide insights into where explicit alignment should occur (input space vs. representation space) and if alignment and risk prediction should be jointly optimized. We demonstrate that jointly learning explicit alignment in representation space while optimizing risk estimation performance, as done in the current state-of-the-art approach, results in a trade-off between alignment quality and predictive performance and show that image-level alignment is superior to representation-level alignment, leading to better deformation field quality and enhanced risk prediction accuracy. The code is available at https://github.com/sot176/Longitudinal_Mammogram_Alignment.git.
However, the optimal approach for explicit alignment in mammography remains underexplored.
https://arxiv.org/abs/2506.19363v1
https://arxiv.org/pdf/2506.19363v1.pdf
null
[ "Solveig Thrun", "Stine Hansen", "Zijun Sun", "Nele Blum", "Suaiba A. Salahuddin", "Kristoffer Wickstrøm", "Elisabeth Wetzer", "Robert Jenssen", "Maik Stille", "Michael Kampffmeyer" ]
[ "Breast Cancer Detection", "Prediction" ]
2025-06-24T00:00:00
null
null
null
null
[]
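Explicit image-level alignment, as discussed in the abstract above, ultimately resamples a prior mammogram with a dense displacement field. A minimal warping routine built on `torch.nn.functional.grid_sample` is sketched below; how the field itself is predicted is outside this sketch, and all shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def warp(image, flow):
    """Warp `image` (N, C, H, W) with a dense displacement field `flow` (N, 2, H, W)."""
    n, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().unsqueeze(0).expand(n, -1, -1, -1)
    coords = base + flow                                   # displaced sampling locations
    # normalise to [-1, 1] as required by grid_sample
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((coords_x, coords_y), dim=-1)       # (N, H, W, 2)
    return F.grid_sample(image, grid, align_corners=True)

prior = torch.rand(1, 1, 256, 256)    # earlier screening image (toy data)
flow = torch.zeros(1, 2, 256, 256)    # displacement field, e.g. from a registration network
aligned = warp(prior, flow)           # an identity flow returns the image unchanged
```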
https://paperswithcode.com/paper/explicit-residual-based-scalable-image-coding
2506.19297
null
null
Explicit Residual-Based Scalable Image Coding for Humans and Machines
Scalable image compression is a technique that progressively reconstructs multiple versions of an image for different requirements. In recent years, images have increasingly been consumed not only by humans but also by image recognition models. This shift has drawn growing attention to scalable image compression methods that serve both machine and human vision (ICMH). Many existing models employ neural network-based codecs, known as learned image compression, and have made significant strides in this field by carefully designing the loss functions. In some cases, however, models are overly reliant on their learning capacity, and their architectural design is not sufficiently considered. In this paper, we enhance the coding efficiency and interpretability of ICMH framework by integrating an explicit residual compression mechanism, which is commonly employed in resolution scalable coding methods such as JPEG2000. Specifically, we propose two complementary methods: Feature Residual-based Scalable Coding (FR-ICMH) and Pixel Residual-based Scalable Coding (PR-ICMH). These proposed methods are applicable to various machine vision tasks. Moreover, they provide flexibility to choose between encoder complexity and compression performance, making it adaptable to diverse application requirements. Experimental results demonstrate the effectiveness of our proposed methods, with PR-ICMH achieving up to 29.57% BD-rate savings over the previous work.
null
https://arxiv.org/abs/2506.19297v1
https://arxiv.org/pdf/2506.19297v1.pdf
null
[ "Yui Tatsumi", "Ziyue Zeng", "Hiroshi Watanabe" ]
[ "Image Compression" ]
2025-06-24T00:00:00
null
null
null
null
[]
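The pixel-residual idea described above, a base layer for machine vision plus a coded residual for human viewing, can be illustrated with a deliberately crude toy codec; the coarse quantiser below is only a placeholder for a learned machine-oriented codec, and a real system would code the residual lossily.

```python
import numpy as np

def encode_base(image):
    """Stand-in for a machine-vision base-layer codec: coarse quantisation."""
    return (image // 16) * 16

original = np.random.randint(0, 256, (64, 64), dtype=np.int32)
base_reconstruction = encode_base(original)

residual = original - base_reconstruction     # what the human-oriented enhancement layer must carry
enhanced = base_reconstruction + residual     # human-oriented reconstruction
assert np.array_equal(enhanced, original)     # lossless in this toy example
```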
https://paperswithcode.com/paper/dataset-of-soil-images-with-corresponding
2506.17469
null
null
Dataset of soil images with corresponding particle size distributions for photogranulometry
Traditional particle size distribution (PSD) analyses create significant downtime and are expensive in labor and maintenance. These drawbacks could be alleviated using optical grain size analysis integrated into routine geotechnical laboratory workflow. This paper presents a high-resolution dataset of 12,714 images of 321 different soil samples collected in the Montreal, Quebec region, alongside their PSD analysis. It is designed to provide a robust starting point for training convolutional neural networks (CNN) in geotechnical applications. Soil samples were photographed in a standardized top-view position with a resolution of 45 MP and a minimum scale of 39.4 micrometers per pixel, both in their moist and dry states. A custom test bench employing 13x9 inch white aluminum trays, on which the samples are spread in a thin layer, was used. For samples exceeding a size limit, a coning and quartering method was employed for mass reduction.
null
https://arxiv.org/abs/2506.17469v2
https://arxiv.org/pdf/2506.17469v2.pdf
null
[ "Thomas Plante St-Cyr", "François Duhaime", "Jean-Sébastien Dubé", "Simon Grenier" ]
[]
2025-06-20T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/convergent-and-divergent-connectivity
2506.19266
null
null
Convergent and divergent connectivity patterns of the arcuate fasciculus in macaques and humans
The organization and connectivity of the arcuate fasciculus (AF) in nonhuman primates remain contentious, especially concerning how its anatomy diverges from that of humans. Here, we combined cross-scale single-neuron tracing - using viral-based genetic labeling and fluorescence micro-optical sectioning tomography in macaques (n = 4; age 3 - 11 years) - with whole-brain tractography from 11.7T diffusion MRI. Complemented by spectral embedding analysis of 7.0T MRI in humans, we performed a comparative connectomic analysis of the AF across species. We demonstrate that the macaque AF originates in the temporal-parietal cortex, traverses the auditory cortex and parietal operculum, and projects into prefrontal regions. In contrast, the human AF exhibits greater expansion into the middle temporal gyrus and stronger prefrontal and parietal operculum connectivity - divergences quantified by Kullback-Leibler analysis that likely underpin the evolutionary specialization of human language networks. These interspecies differences - particularly the human AF's broader temporal integration and strengthened frontoparietal linkages - suggest a connectivity-based substrate for the emergence of advanced language processing unique to humans. Furthermore, our findings offer a neuroanatomical framework for understanding AF-related disorders such as aphasia and dyslexia, where aberrant connectivity disrupts language function.
null
https://arxiv.org/abs/2506.19266v1
https://arxiv.org/pdf/2506.19266v1.pdf
null
[ "Jiahao Huang", "Ruifeng Li", "Wenwen Yu", "Anan Li", "Xiangning Li", "Mingchao Yan", "Lei Xie", "Qingrun Zeng", "Xueyan Jia", "Shuxin Wang", "Ronghui Ju", "Feng Chen", "Qingming Luo", "Hui Gong", "Xiaoquan Yang", "Yuanjing Feng", "Zheng Wang" ]
[ "Anatomy", "Diffusion MRI" ]
2025-06-24T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" } ]
https://paperswithcode.com/paper/quantitative-benchmarking-of-anomaly
2506.19234
null
null
Quantitative Benchmarking of Anomaly Detection Methods in Digital Pathology
Anomaly detection has been widely studied in the context of industrial defect inspection, with numerous methods developed to tackle a range of challenges. In digital pathology, anomaly detection holds significant potential for applications such as rare disease identification, artifact detection, and biomarker discovery. However, the unique characteristics of pathology images, such as their large size, multi-scale structures, stain variability, and repetitive patterns, introduce new challenges that current anomaly detection algorithms struggle to address. In this quantitative study, we benchmark over 20 classical and prevalent anomaly detection methods through extensive experiments. We curated five digital pathology datasets, both real and synthetic, to systematically evaluate these approaches. Our experiments investigate the influence of image scale, anomaly pattern types, and training epoch selection strategies on detection performance. The results provide a detailed comparison of each method's strengths and limitations, establishing a comprehensive benchmark to guide future research in anomaly detection for digital pathology images.
null
https://arxiv.org/abs/2506.19234v1
https://arxiv.org/pdf/2506.19234v1.pdf
null
[ "Can Cui", "Xindong Zheng", "Ruining Deng", "Quan Liu", "Tianyuan Yao", "Keith T Wilson", "Lori A Coburn", "Bennett A Landman", "Haichun Yang", "Yaohong Wang", "Yuankai Huo" ]
[ "Anomaly Detection", "Artifact Detection", "Benchmarking" ]
2025-06-24T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/deformable-medical-image-registration-with
2506.19222
null
null
Deformable Medical Image Registration with Effective Anatomical Structure Representation and Divide-and-Conquer Network
Effective representation of Regions of Interest (ROI) and independent alignment of these ROIs can significantly enhance the performance of deformable medical image registration (DMIR). However, current learning-based DMIR methods have limitations. Unsupervised techniques disregard ROI representation and proceed directly with aligning pairs of images, while weakly-supervised methods heavily depend on label constraints to facilitate registration. To address these issues, we introduce a novel ROI-based registration approach named EASR-DCN. Our method represents medical images through effective ROIs and achieves independent alignment of these ROIs without requiring labels. Specifically, we first used a Gaussian mixture model for intensity analysis to represent images using multiple effective ROIs with distinct intensities. Furthermore, we propose a novel Divide-and-Conquer Network (DCN) to process these ROIs through separate channels to learn feature alignments for each ROI. The resultant correspondences are seamlessly integrated to generate a comprehensive displacement vector field. Extensive experiments were performed on three MRI and one CT datasets to showcase the superior accuracy and deformation reduction efficacy of our EASR-DCN. Compared to VoxelMorph, our EASR-DCN achieved improvements of 10.31\% in the Dice score for brain MRI, 13.01\% for cardiac MRI, and 5.75\% for hippocampus MRI, highlighting its promising potential for clinical applications. The code for this work will be released upon acceptance of the paper.
null
https://arxiv.org/abs/2506.19222v1
https://arxiv.org/pdf/2506.19222v1.pdf
null
[ "Xinke Ma", "Yongsheng Pan", "Qingjie Zeng", "Mengkang Lu", "Bolysbek Murat Yerzhanuly", "Bazargul Matkerim", "Yong Xia" ]
[ "Deformable Medical Image Registration", "Hippocampus", "Image Registration", "Medical Image Registration" ]
2025-06-24T00:00:00
null
null
null
null
[]
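The EASR-DCN abstract above describes representing an image as intensity-based ROIs via a Gaussian mixture model before the divide-and-conquer alignment. A sketch of that first step using scikit-learn follows; the component count, the random volume, and the per-voxel clustering are assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Fit a Gaussian mixture to voxel intensities and use component assignments as rough ROI masks.
volume = np.random.rand(32, 64, 64).astype(np.float32)     # stand-in for an MRI volume
intensities = volume.reshape(-1, 1)

gmm = GaussianMixture(n_components=3, random_state=0).fit(intensities)
labels = gmm.predict(intensities).reshape(volume.shape)    # one component label per voxel

rois = [(labels == k) for k in range(3)]                   # binary mask per intensity ROI
# each ROI would then be aligned in its own channel by the divide-and-conquer network
```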