The Harker School
San Jose, CA 95129
Abstract
Textile dyes account for 20% of global water pollution. Mycoremediation, a promising approach that uses cheap, naturally growing fungi, has not yet seen production at scale. While numerous studies indicate its benefits, it is challenging to apply the specific findings of each study to the combination of environmental factors present at a given physical site, a gap we believe machine learning can help fill if datasets become available. We propose an approach to drive machine learning research in mycoremediation by contributing a comprehensive dataset, built by using advanced language models and vision transformers to extract and categorize experimental data from published research papers. This dataset will enable ML-driven innovation in matching fungi to specific dye types, optimizing remediation processes, and scaling up mycoremediation efforts effectively.
1 Introduction
Textile manufacturing is one of the world's greatest environmental polluters [1]. Textile dyes are responsible for 20% of global water pollution [2, 3], and the damage to the planet's finite freshwater grows daily. Furthermore, textile dyes in water have polluted agricultural areas and caused significant health damage to humans, animals, and plants [5]. Many techniques exist for processing textile dye effluent, but each has advantages and drawbacks. For example, bioabsorption generates new forms of waste that must be incinerated, utilized, or reprocessed [4]. A promising technique is mycoremediation, in which natural fungal materials are used to break down the chemical structure of dyes into CO2 and water.
Mycoremediation has many potential advantages, including the ability to grow the substrate at low
cost, generally understood positive interactions with soils, and the ability to degrade specific dye
types. However, while substantial research exists on mycoremediation, few implementations at scale exist [5]. While not conclusive, a recent review of patents in the field also indicates that there has not been a significant shift from research to production [5]. A known challenge is that, while many point solutions exist, each experiment is sufficiently unique that it is difficult to generalize to a new case with confidence about which specific method should be used.
Figure 1 shows a simple example of the importance of process to decolorization efficacy. 150 mL of dye effluent was prepared by mixing 20 g of Rit Dye [17] into one liter of distilled water. One cup of Trametes versicolor fungi was added, and the combination was placed on a shaking table. After 2 weeks, the fungi were filtered out and another cup of fresh Trametes versicolor was added. After 2 more weeks, the color of the resulting solution was measured via spectroscopy. A second experiment used the same dye concentration, fungal mass, timeframe, and agitation, but placed all the fungi in the solution at the start and left it for four weeks. The decolorization levels are notably different.
Figure 1: A Mycoremediation Experiment. The panels, left to right, show the experiment testbed; the color change from start (leftmost) through the single-cycle experiment (middle) and the 2-cycle experiment (right); and the spectrometry graph, indicating that while both experiments show value, the second approach gets much closer to distilled water.
Tackling Climate Change with Machine Learning: workshop at NeurIPS 2024.
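Decolorization efficacy from spectroscopy readings of this kind is conventionally expressed as the percentage drop in absorbance at the dye's peak wavelength. A minimal sketch (the function name and the sample readings are illustrative, not measured values from this experiment):

```python
def decolorization_pct(initial_absorbance: float, final_absorbance: float) -> float:
    """Percent decolorization from absorbance at the dye's peak
    wavelength: 100 * (A0 - At) / A0."""
    return 100.0 * (initial_absorbance - final_absorbance) / initial_absorbance

# Illustrative readings: an absorbance drop from 1.2 to 0.3
# corresponds to roughly 75% decolorization.
print(decolorization_pct(1.2, 0.3))
```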
This appears to be a problem to which machine learning can add value. The fundamental chemistry of
mycoremediation, particularly for dyes, is known. However, the precise results are heavily dependent
on environmental factors. Machine learning can discover the patterns within these relationships. There
are more than 10,000 types of dye [7] and hundreds of strains of fungi with mycoremediation potential [6], making it challenging to create simple models that match fungi to dyes. The chief obstacle to applying machine learning is the lack of datasets: to our knowledge, no large-scale datasets exist for mycoremediation processes for dye treatment.
2 Methodology
Figure 2 describes our proposed methodology. We employ a web crawler to search for published
research at the intersection of mycoremediation and dyes. Any publicly accessible PDF files are
processed via a data processing pipeline to extract experiments contributed by each paper. The first
step is to determine whether the paper contributes unique experiments or is a review article. If it is
the former, the PDF is processed in a number of ways (see Figure 2), with the goal of extracting one
row of information for every unique experiment in the paper. By manually examining literature on
dye mycoremediation, we have determined that the key factors affecting the performance of dye
decolorization (besides the specific fungi and dye) are temperature, pH, agitation (shaking or stirring),
timeframe, dye concentration, and fungal mass per unit of dye volume. The decolorization efficacy is
often measured by color change spectroscopy and reported as a percentage improvement. Therefore,
the pipeline attempts to extract each of these values for every unique experiment reported in every
paper.
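The target row described above can be represented as a fixed schema; one possible sketch follows, where the field names, units, and the sample values are our own illustrative choices rather than a finalized specification:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExperimentRow:
    """One extracted dye-mycoremediation experiment; None marks
    values a paper did not report."""
    fungus: str                          # e.g. "Trametes versicolor"
    dye: str                             # dye name or class
    temperature_c: Optional[float]       # incubation temperature, deg C
    ph: Optional[float]
    agitation_rpm: Optional[float]       # None or 0 for static cultures
    duration_days: Optional[float]
    dye_concentration_mg_l: Optional[float]
    fungal_mass_g_per_l: Optional[float]
    decolorization_pct: Optional[float]  # reported efficacy

# Illustrative row (values are hypothetical):
row = ExperimentRow("Trametes versicolor", "Reactive Black 5",
                    25.0, 6.0, 120.0, 14.0, 100.0, None, 85.0)
```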
From an experimental standpoint, we intend to conduct an ablation study to explore the sensitivity of
extraction effectiveness to different pipeline techniques. Figure 2 shows our planned study, in which each PDF is processed via plain text extraction, segmented into pages for text-based retrieval-augmented generation [9], page-selected by a vision transformer, or fed directly into a large language model. The cross-sensitivity to the LLM itself will also be measured by testing several state-of-the-art large language models (Llama, GPT-4o, Gemini, and Claude). Effectiveness and correctness will
be measured on a holdout set of research papers manually annotated for the correct experiments and
then compared with the pipeline’s extracted values. Measures will be reported of how many of the
correct experiments were identified and missed, how many extraneous experiments were added, and
the feature level correctness of all the correctly reported experiment rows. We intend to leverage a
number of open-source vision transformers and text processing methods in our study [18, 19, 20, 21, 23, 24, 25, 26], as well as Python processing tools [22].
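The evaluation described above amounts to set-level matching of extracted experiment rows against annotated gold rows, plus per-field accuracy on the matched rows. A minimal sketch, under the simplifying assumption that rows can be matched on exact key fields (the function and field names are hypothetical):

```python
def score_extraction(gold, extracted, key_fields=("fungus", "dye")):
    """Match extracted rows to gold rows on key fields, then count
    found / missed / extraneous rows and per-field agreement."""
    key = lambda r: tuple(r[f] for f in key_fields)
    gold_by_key = {key(r): r for r in gold}
    matched, field_hits, field_total = set(), 0, 0
    for r in extracted:
        g = gold_by_key.get(key(r))
        if g is None:
            continue  # extraneous row: no gold counterpart
        matched.add(key(r))
        for f in (f for f in g if f not in key_fields):
            field_total += 1
            field_hits += int(r.get(f) == g[f])
    return {
        "found": len(matched),
        "missed": len(gold_by_key) - len(matched),
        "extraneous": sum(1 for r in extracted if key(r) not in gold_by_key),
        "field_accuracy": field_hits / field_total if field_total else None,
    }

# Toy example: one correct row found, one gold row missed, one spurious row.
gold = [{"fungus": "A", "dye": "X", "ph": 5.0}, {"fungus": "B", "dye": "Y", "ph": 6.0}]
extracted = [{"fungus": "A", "dye": "X", "ph": 5.0}, {"fungus": "C", "dye": "Z", "ph": 7.0}]
print(score_extraction(gold, extracted))
```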
3 Initial Feasibility Indicators
Our work to date has demonstrated several positive indicators for the feasibility of our approach. A
crawler implemented to do a breadth-first search with deduplication, starting from a single recent
mycoremediation paper, could access over 2000 candidate papers on the public internet in 24 hours, of which about 100 were found to be pertinent to our purpose. While this does not speak to
the total volume present and freely accessible, it is a positive indicator. Prior bibliometric research
indicates that over 8000 research papers were published on textile dye treatment between 1990
and 2022 [16]. A sample analysis of the paper in [13] yields promise but also highlights the need
for a comprehensive evaluation of text extraction approaches. [13] is a recent (2024) study of the
mycoremediation efficacy of three fungal variants on five dye types. Each experiment generates seven
results (one per day), for a total of 105 experiments. A straightforward query of GPT-4o delivered 15 experiments (the 7-day outcomes, all of which were correct in their columnar details) but
could not extract the intermediate results, which were present in graphs in the paper. A tuned prompt
requesting intermediate results returned 30 experiments, where some dye experiments were reported
for multiple days and others for only one day. Of these, all but two were correct in all columnar
details, providing preliminary evidence that our approach shows promise, though detailed study and validation are needed to find the best extraction technique.
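The crawler behavior described above, a breadth-first search with deduplication seeded from a single paper, can be sketched as follows; `fetch_links` is a hypothetical stand-in for whatever link-extraction backend is actually used:

```python
from collections import deque

def bfs_crawl(seed_url, fetch_links, max_pages=2000):
    """Breadth-first crawl from a seed URL, deduplicating URLs as
    they are discovered. fetch_links(url) -> iterable of outgoing URLs."""
    seen = {seed_url}
    queue = deque([seed_url])
    visited = []
    while queue and len(visited) < max_pages:
        url = queue.popleft()
        visited.append(url)
        for link in fetch_links(url):
            if link not in seen:   # deduplicate before enqueueing
                seen.add(link)
                queue.append(link)
    return visited

# Toy link graph standing in for paper-to-paper references:
links = {"a": ["b", "c"], "b": ["c", "d"], "c": [], "d": []}
print(bfs_crawl("a", lambda u: links[u]))
```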
4 Pathway to Impact
We envision this dataset being used similarly to how [14,15] are used in the drug discovery process.
These two datasets, both derived via NLP applied to public sources, have generated substantial
innovation in their domain. We intend to use the methodology above and publish the method (and all