from typing import Dict

from lxml import etree


def segment(chapter: etree._Element) -> Dict[str, str]:
    segments = {}  # this will be returned
    t = []         # this is a buffer
    chap_label = str(chapter.get("n"))
    sect_label = "0"
    for element in chapter.iter():
        if element.get("unit") == "number":
            # milestone: fill and close the previous segment:
            label = chap_label + "_" + sect_label
            segments[label] = " ".join(t)
            # reset buffer
            t = []
            # if there is text after the milestone,
            # add it as first content to the buffer
            if element.tail:
                t.append(" ".join(element.tail.replace("\n", " ").strip().split()))
            # prepare for next labelmaking
            sect_label = str(element.get("n"))
        else:
            if element.text:
                t.append(" ".join(element.text.replace("\n", " ").strip().split()))
            if element.tail:
                t.append(" ".join(element.tail.replace("\n", " ").strip().split()))
    # all elements are processed,
    # add text remainder/current text buffer content
    label = chap_label + "_" + sect_label
    segments[label] = " ".join(t)
    return segments

nsmap = {"tei": "http://www.tei-c.org/ns/1.0"}
xp_divs = etree.XPath("(//tei:body/tei:div)", namespaces=nsmap)
segmented = {}
divs = xp_divs(document)
segments = (segment(div) for div in divs)
for d in segments:
    print(d)
# The TEI document itself is parsed from its source (not reproduced here):
document = etree.fromstring(
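Since the TEI source that goes into document is not reproduced above, here is a minimal, self-contained sketch of what the segment() function does. The TEI fragment below is invented purely for illustration (one div with two numbered milestones); it is not taken from the actual edition files.
from lxml import etree

sample_tei = b'''<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <text><body>
    <div n="6">Preamble text.
      <milestone unit="number" n="1"/>First section text.
      <milestone unit="number" n="2"/>Second section text.
    </div>
  </body></text>
</TEI>'''

sample_doc = etree.fromstring(sample_tei)
for div in xp_divs(sample_doc):   # xp_divs as defined above
    print(segment(div))
# expected output, roughly:
# {'6_0': 'Preamble text.', '6_1': 'First section text.', '6_2': 'Second section text.'}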
## Introduction
This file is the continuation of preceding work. Previously, I worked my way through a couple of text-analysis approaches - such as tf/idf frequencies, n-grams and the like - in the context of a project concerned with Juan de Solórzano Pereira's *Politica Indiana*. This can be seen [here](TextProcessing_Solorzano.ipynb).
In that context, I got somewhat stuck when I was trying to automatically align corresponding passages of two editions of the same work ... where one edition is a **translation** of the other, so that we are dealing with two different languages. In vector terminology, two languages mean two almost orthogonal vectors, and it makes little sense to search for similarities there.
The present file takes this up, trying to refine the approach taken there and to find alternative ways of analysing a text across several languages. This time, the work concerned is Martín de Azpilcueta's *Manual de confesores*, a 16th-century work that has seen a great many editions and translations, quite a few of them by the original author himself. It is the subject of the research project ["Martín de Azpilcueta’s Manual for Confessors and the Phenomenon of Epitomisation"](http://www.rg.mpg.de/research/martin-de-azpilcuetas-manual-for-confessors) by Manuela Bragagnolo.
(There are a few DH-ey things about the project that are not directly of concern here, like a synoptic display of several editions or the presentation of the divergence of many actual translations of a given term. Such aspects are being treated with other software, like [HyperMachiavel](http://hyperprince.ens-lyon.fr/hypermachiavel) or [Lera](http://lera.uzi.uni-halle.de/).)
As in the previous case, the programming language used in the following examples is "python", and the tool used to bring prose discussion and code samples together is called ["jupyter"](http://jupyter.org/). (A common way of installing both the language and the jupyter software, especially on Windows, is to install a python "distribution" like [Anaconda](https://www.anaconda.com/what-is-anaconda/).) In jupyter, you have a "notebook" that you can populate with text (jupyter understands [markdown](http://jupyter-notebook.readthedocs.io/en/stable/examples/Notebook/Working%20With%20Markdown%20Cells.html) formatting, if you want to use it) or code, and a program that pipes a nice rendering of the notebook to a web browser, as you are reading it right now. In many places in such a notebook, the output that the code samples produce is printed right below the code itself. Sometimes this can be quite a lot of output, and depending on your viewing environment you might have to scroll quite a way to get to the continuation of the discussion.
You can save your notebook online (the current one is [here at github](https://github.com/awagner-mainz/notebooks/blob/master/gallery/TextProcessing_Azpilcueta.ipynb)), and there is an online service, nbviewer, able to render any notebook that it can access online. So chances are you are reading this present notebook at the web address [https://nbviewer.jupyter.org/github/awagner-mainz/notebooks/blob/master/gallery/TextProcessing_Azpilcueta.ipynb](https://nbviewer.jupyter.org/github/awagner-mainz/notebooks/blob/master/gallery/TextProcessing_Azpilcueta.ipynb).
A final word about the elements of this notebook:
\n
# Preparations
End of explanation
sourcePath = 'DHd2019/cap6_align_-_2018-01.csv'
Explanation: Unlike in the previous case, where we had word files that we could export as plaintext, in this case Manuela has prepared a sample chapter with four editions transcribed in parallel in an office spreadsheet. So, first of all, we make sure that we have good UTF-8 comma-separated-value files, e.g. by uploading a csv export from our office program of choice to a CSV linting service. (As a side remark: in my case, exporting with LibreOffice offered options to select UTF-8 encoding and to choose the field delimiter, and it resulted in a valid csv file; MS Excel did neither of those.) Below, we expect the file at the following position:
End of explanation
import csv
sourceFile = open(sourcePath, newline='', encoding='utf-8')
sourceTable = csv.reader(sourceFile)
Explanation: Then, we can go ahead and open the file in python's csv reader:
End of explanation
import re
# Initialize a list of lists (i.e. a two-dimensional list) ...
Editions = [[]]
# ... so that it has four sub-lists 0 to 3, one per edition
for i in range(3):
    a = []
    Editions.append(a)
# Now populate it from our sourceTable
sourceFile.seek(0)  # in repeated runs, restart from the beginning of the file
for row in sourceTable:
    for i, field in enumerate(row):  # We normalize quite a bit here already:
        p = field.replace('¶', ' ¶ ')                  # spaces around ¶
        p = re.sub(r"&([^c])", r" & \1", p)            # always spaces around &, except for &c
        p = re.sub(r"([,.:?/])(\S)", r"\1 \2", p)      # always a space after ',.:?/'
        p = re.sub(r"([0-9])([a-zA-Z])", r"\1 \2", p)  # always a space between numbers and word characters
        p = re.sub(r"([a-z]) ?\(\1\b", r" (\1", p)     # if a letter is repeated on its own in a bracketed
                                                       # expression, it is a note marker and we eliminate
                                                       # the character from the preceding word
        p = " ".join(p.split())                        # always only one space
        Editions[i].append(p)
print(str(len(Editions[0])) + " rows read.\n")
# As an example, see the first seven sections of the third edition (1556, Spanish):
for field in range(min(7, len(Editions[2]))):
    print(Editions[2][field])
Explanation: And next, we read each line into new elements of four respective lists (since we're dealing with one sample chapter, we try to handle it all in memory first and see if we run into problems):
(Note here and in the following that in most cases, when the program is counting, it does so beginning with zero, which means that if we end up with 20 segments, they are going to be called segment 0, segment 1, ..., segment 19. There is not going to be a segment bearing the number twenty, although we do have twenty segments: the first one has the number zero and the twentieth one has the number nineteen. Even for more experienced coders, this sometimes leads to mistakes, the so-called "off-by-one errors".)
End of explanation
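To make the normalization rules above concrete, here is what they do to an invented sample string. Purely for illustration, the same rules are collected in a small helper; neither the helper nor the sample text is part of the actual processing.
import re

def normalize(field):
    # same rules as in the loop above, gathered in one place for the demo
    p = field.replace('¶', ' ¶ ')
    p = re.sub(r"&([^c])", r" & \1", p)
    p = re.sub(r"([,.:?/])(\S)", r"\1 \2", p)
    p = re.sub(r"([0-9])([a-zA-Z])", r"\1 \2", p)
    p = re.sub(r"([a-z]) ?\(\1\b", r" (\1", p)
    return " ".join(p.split())

print(normalize("pecado¶mortal,como&veniala (a vide c.12De poenit."))
# -> 'pecado ¶ mortal, como & venial (a vide c. 12 De poenit.'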
numOfEds = 4
language = ["PT", "PT", "ES", "LA"]  # I am using language codes that can later on be used in babelnet
year = [1549, 1552, 1556, 1573]
Explanation: Actually, let's define two more list variables to hold information about the different editions - language and year of print:
End of explanation
lemma = [{} for i in range(numOfEds)]   # one lookup dictionary (wordform -> lemma) per edition
for i in range(numOfEds):
    wordfile_path = 'Azpilcueta/wordforms-' + language[i].lower() + '.txt'
    # open the wordfile (defined above) for reading
    wordfile = open(wordfile_path, encoding='utf-8')
    tempdict = []
    for line in wordfile.readlines():
        tempdict.append(tuple(line.split('>')))   # we split each line by ">" and append
                                                  # a tuple to a temporary list.
    lemma[i] = {k.strip(): v.strip() for k, v in tempdict}   # for every tuple in the temp. list,
                                                             # we strip whitespace and make a key-value
                                                             # pair, adding it to our "lemma" dictionary
    wordfile.close()
    print(str(len(lemma[i])) + ' ' + language[i] + ' wordforms known to the system.')
Explanation: ## TF/IDF
In the previous (i.e. Solórzano) analyses, things like tokenization, lemmatization and stop-word filtering are explained step by step. Here, we rely on what we have found there and feed it all into functions that are ready-made and available in suitable libraries...
First, we build our lemmatization resource and "function":
End of explanation
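As the code shows, each wordforms file is expected to contain one "wordform > lemma" pair per line. Just to make the lookup structure concrete, here is a tiny in-memory version of the same dictionary-building step; the Portuguese entries are invented for the example:
sample_lines = ["diremos > dizer", "dissemos > dizer", "pecados > pecado"]
sample_lemma = {k.strip(): v.strip() for k, v in (line.split('>') for line in sample_lines)}
print(sample_lemma['diremos'])   # -> 'dizer'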
lemma[language.index("PT")]['diremos']
Explanation: Again, a quick test: let's see with which "lemma"/basic word the particular wordform "diremos" is associated, or, in other words, what value our lemma variable returns when we query for the key "diremos":
End of explanation
stopwords = []
for i in range(numOfEds):
    stopwords_path = 'DHd2019/stopwords-' + language[i].lower() + '.txt'
    stopwords.append(open(stopwords_path, encoding='utf-8').read().splitlines())
    print(str(len(stopwords[i])) + ' ' + language[i]
          + ' stopwords known to the system, e.g.: ' + str(stopwords[i][100:119]) + '\n')
Explanation: And we are going to need the stopwords lists:
End of explanation
abbreviations = []  # As of now, this is one list for all languages :-(
abbrs_path = 'DHd2019/abbreviations.txt'
abbreviations = open(abbrs_path, encoding='utf-8').read().splitlines()
print(str(len(abbreviations)) + ' abbreviations known to the system, e.g.: ' + str(abbreviations[100:119]))
Explanation: (In contrast to simple numbers, which have been filtered out by the stopwords filter, I have left numbers representing years, like "1610", in place.)
And, later on, when we try sentence segmentation, we are going to need the list of abbreviations - words after which a period does not necessarily mean a new sentence:
End of explanation
import re
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
numTopTerms = 20
# So first we build a tokenising and lemmatising function (per language) to work as
# an input filter to the TfidfVectorizer function
def ourLaLemmatiser(str_input):
    wordforms = re.split(r'\W+', str_input)
    return [lemma[language.index("LA")][wordform].lower().strip()
            if wordform in lemma[language.index("LA")]
            else wordform.lower().strip()
            for wordform in wordforms]
def ourEsLemmatiser(str_input):
    wordforms = re.split(r'\W+', str_input)
    return [lemma[language.index("ES")][wordform].lower().strip()
            if wordform in lemma[language.index("ES")]
            else wordform.lower().strip()
            for wordform in wordforms]
def ourPtLemmatiser(str_input):
    wordforms = re.split(r'\W+', str_input)
    return [lemma[language.index("PT")][wordform].lower().strip()
            if wordform in lemma[language.index("PT")]
            else wordform.lower().strip()
            for wordform in wordforms]
def ourLemmatiser(lang):
    if (lang == "LA"):
        return ourLaLemmatiser
    if (lang == "ES"):
        return ourEsLemmatiser
    if (lang == "PT"):
        return ourPtLemmatiser
def ourStopwords(lang):
    if (lang == "LA"):
        return stopwords[language.index("LA")]
    if (lang == "ES"):
        return stopwords[language.index("ES")]
    if (lang == "PT"):
        return stopwords[language.index("PT")]
topTerms = []
for i in range(numOfEds):
    topTermsEd = []
    # Initialize the library's function, specifying our
    # tokenizing function from above and our stopwords list.
    tfidf_vectorizer = TfidfVectorizer(stop_words=ourStopwords(language[i]), use_idf=True,
                                       tokenizer=ourLemmatiser(language[i]), norm='l2')
    # Finally, we feed our corpus to the function to build a new "tfidf_matrix" object
    tfidf_matrix = tfidf_vectorizer.fit_transform(Editions[i])
    # convert the matrix to an array to loop over it
    mx_array = tfidf_matrix.toarray()
    # get the feature names
    fn = tfidf_vectorizer.get_feature_names()   # use get_feature_names_out() with scikit-learn >= 1.0
    # now loop through all segments and get the respective top n words.
    pos = 0
    for j in mx_array:
        # We have empty segments, i.e. none of the words in our vocabulary has any tf/idf score > 0
        if (j.max() == 0):
            topTermsEd.append([("", 0)])
        # otherwise append (present) lemmatised words until numTopTerms or the number of
        # non-stopword words is reached
        else:
            topTermsEd.append(
                [(fn[x], j[x]) for x in (j * -1).argsort() if j[x] > 0]
                [:min(numTopTerms,
                      len([word for word in re.split(r'\W+', Editions[i][pos])
                           if word and ourLemmatiser(language[i])(word)[0] not in stopwords[i]]))])
        pos += 1
    topTerms.append(topTermsEd)
Explanation: Next, we should find some very characteristic words for each segment of each edition. (Let's say we are looking for the "Top 20".) We should build a vocabulary for each edition individually and only afterwards work towards a common vocabulary of several "Top n" sets.
End of explanation
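To make the ranking logic above a bit more tangible, here is a toy run of the same vectorizer-plus-argsort pattern on a tiny invented three-"segment" corpus (no custom tokenizer or stopwords, just the bare mechanics):
from sklearn.feature_extraction.text import TfidfVectorizer

toy = ["peccatum mortale et veniale",
       "confessio et poenitentia",
       "de restitutione et usura"]
toy_vectorizer = TfidfVectorizer(norm='l2')
toy_matrix = toy_vectorizer.fit_transform(toy).toarray()
toy_terms = toy_vectorizer.get_feature_names_out()   # get_feature_names() on older scikit-learn
for row in toy_matrix:
    ranked = [(toy_terms[x], round(row[x], 3)) for x in (row * -1).argsort() if row[x] > 0]
    print(ranked[:3])   # the three highest-scoring terms of each toy segment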
segment_no = 18
Explanation: ## Translations?
Maybe there is an approach to inter-lingual comparison after all. After a first unsuccessful try with conceptnet.io, I next want to try Babelnet in order to look up synonyms, related terms and translations. I still have to study the API...
For example, let's take this single segment no. 18 (the nineteenth, counting from zero):
End of explanation
print("Comparing words from segment " + str(segment_no) + " ...")
print(" ")
print("Here is the segment in the four editions:")
print(" ")
for i in range(numOfEds):
    print("Ed. " + str(i) + ":")
    print("------")
    print(Editions[i][segment_no])
    print(" ")
print(" ")
print(" ")
# Build list of most significant words for the segment
print("Most significant words in the segment:")
print(" ")
for i in range(numOfEds):
    print("Ed. " + str(i) + ":")
    print("------")
    print(topTerms[i][segment_no])
    print(" ")
Explanation: And first, let's see how this segment compares in the different editions:
End of explanation
startEd = 1
secondEd = 2
Explanation: Now we look up the "concepts" associated to those words in babelnet. Then we look up the concepts associated with the words of the same segment in another edition/language, and see if the concepts are the same.
But we have to decide on some particular editions to get things started. Let's take the Spanish and Latin ones:
End of explanation
import urllib.parse
import urllib.request
import json
from collections import defaultdict
babelAPIKey = '18546fd3-8999-43db-ac31-dc113506f825'
babelGetSynsetIdsURL = "https://babelnet.io/v5/getSynsetIds?" + \
                       "targetLang=LA&targetLang=ES&targetLang=PT" + \
                       "&searchLang=" + language[startEd] + \
                       "&key=" + babelAPIKey + \
                       "&lemma="
# Build lists of possible concepts for the first language/edition
top_possible_conceptIDs = defaultdict(list)
for (word, val) in topTerms[startEd][segment_no]:
    concepts_uri = babelGetSynsetIdsURL + urllib.parse.quote(word)
    response = urllib.request.urlopen(concepts_uri)
    conceptIDs = json.loads(response.read().decode(response.info().get_param('charset') or 'utf-8'))
    for rel in conceptIDs:
        top_possible_conceptIDs[word].append(rel.get("id"))
print(" ")
print("For each of the '" + language[startEd] + "' words, here are possible synsets:")
print(" ")
for word in top_possible_conceptIDs:
    print(word + ":" + " " + ', '.join(c for c in top_possible_conceptIDs[word]))
    print(" ")
print(" ")
print(" ")
print(" ")
babelGetSynsetIdsURL2 = "https://babelnet.io/v5/getSynsetIds?" + \
                        "targetLang=LA&targetLang=ES&targetLang=PT" + \
                        "&searchLang=" + language[secondEd] + \
                        "&key=" + babelAPIKey + \
                        "&lemma="
# Build lists of possible concepts for the second language/edition
top_possible_conceptIDs_2 = defaultdict(list)
for (word, val) in topTerms[secondEd][segment_no]:
    concepts_uri = babelGetSynsetIdsURL2 + urllib.parse.quote(word)
    response = urllib.request.urlopen(concepts_uri)
    conceptIDs = json.loads(response.read().decode(response.info().get_param('charset') or 'utf-8'))
    for rel in conceptIDs:
        top_possible_conceptIDs_2[word].append(rel.get("id"))
print(" ")
print("For each of the '" + language[secondEd] + "' words, here are possible synsets:")
print(" ")
for word in top_possible_conceptIDs_2:
    print(word + ":" + " " + ', '.join(c for c in top_possible_conceptIDs_2[word]))
    print(" ")
# calculate number of overlapping terms
values_a = set([item for sublist in top_possible_conceptIDs.values() for item in sublist])
values_b = set([item for sublist in top_possible_conceptIDs_2.values() for item in sublist])
overlaps = values_a & values_b
print("Overlaps: " + str(overlaps))
babelGetSynsetInfoURL = "https://babelnet.io/v5/getSynset?key=" + babelAPIKey + \
                        "&targetLang=LA&targetLang=ES&targetLang=PT" + \
                        "&id="
for c in overlaps:
    info_uri = babelGetSynsetInfoURL + c
    response = urllib.request.urlopen(info_uri)
    words = json.loads(response.read().decode(response.info().get_param('charset') or 'utf-8'))
    senses = words['senses']
    for result in senses[:1]:
        result_lemma = result['properties'].get('fullLemma')   # careful not to overwrite our "lemma" dictionaries
        resultlang = result['properties'].get('language')
        print(c + ": " + result_lemma + " (" + resultlang.lower() + ")")
# what's left: do a nifty ranking
Explanation: And then we can continue...
End of explanation
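The "nifty ranking" is still left to do. Purely as a sketch of one possible direction (not part of the original analysis), the two synset-ID sets just computed can already be condensed into a single comparable score per segment pair, e.g. with a plain Jaccard index:
def synset_jaccard(ids_a, ids_b):
    """Share of synset IDs common to both sets (0.0 - 1.0)."""
    if not ids_a or not ids_b:
        return 0.0
    return len(ids_a & ids_b) / len(ids_a | ids_b)

print("Jaccard overlap for segment", segment_no, "between ed.", startEd, "and ed.", secondEd, ":",
      round(synset_jaccard(values_a, values_b), 3))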
from nltk import sent_tokenize
## First, train the sentence tokenizer:
from pprint import pprint
from nltk.tokenize.punkt import PunktSentenceTokenizer, PunktLanguageVars, PunktTrainer

class BulletPointLangVars(PunktLanguageVars):
    sent_end_chars = ('.', '?', ':', '!', '¶')

trainer = PunktTrainer()
trainer.INCLUDE_ALL_COLLOCS = True
tokenizer = PunktSentenceTokenizer(trainer.get_params(), lang_vars=BulletPointLangVars())
for tok in abbreviations:
    tokenizer._params.abbrev_types.add(tok)
## Now we sentence-segmentize all our editions, printing results and saving them to files:
# folder for the several segment files:
outputBase = 'Azpilcueta/sentences'
dest = None
# Then, sentence-tokenize our segments:
for i in range(numOfEds):
    dest = open(outputBase + '_' + str(year[i]) + '.txt',
                encoding='utf-8',
                mode='w')
    print("Sentence-split of ed. " + str(i) + ":")
    print("------")
    for s in range(0, len(Editions[i])):
        for a in tokenizer.tokenize(Editions[i][s]):
            dest.write(a.strip() + '\n')
            print(a)
        dest.write('\n')   # blank line between segments
        print(' ')
    dest.close()
Explanation: Actually, I think this is somewhat promising - an overlap of four independent, highly meaning-bearing words, or of forty-something related concepts. At first glance, they should be capable of distinguishing this section from all the other ones. However, getting this result was made possible by quite a bit of manual tuning of the stopwords and lemmatization dictionaries beforehand, so this work is important and cannot be eliminated.
## New Approach: Use an Aligner from Machine Translation Studies
In contrast to what I thought previously, there are a couple of tools for automatically aligning parallel texts after all. After some investigation of the literature, the most promising candidate seems to be HunAlign. However, as this is a command-line tool written in C++ (there is LF Aligner, a GUI, available), it is not possible to run it from within this notebook.
First results were problematic, due to the different literary conventions that our editions follow: punctuation was used inconsistently (but sentence length is one of the most relevant factors for aligning), as were abbreviations and notes.
My current idea is to use this notebook to preprocess the texts and to feed a cleaned-up version of them to hunalign...
Coming back to this after a first couple of rounds with Hunalign, I have the feeling that the very divergence of these literary conventions probably means that aligning via sentence lengths is a bad idea in our case from the outset. Probably better to approach this with GMA or similar methods. Anyway, here are the first attempts with Hunalign:
End of explanation
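Before moving on, a quick sanity check of the tokenizer on an invented sample string. Whether the period after "cap" is protected depends on "cap" actually being among the abbreviations loaded above, so take the expected behaviour as an assumption:
sample = "Vide cap. 23 de poenitentia. Quid est contritio? Est dolor de peccatis."
print(tokenizer.tokenize(sample))
# If "cap" is in the abbreviation list, this should come out as three sentences
# (no break after "cap."); "?" ends a sentence, and so would ":", "!" and "¶"
# because of BulletPointLangVars above.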
# folder for the several segment files:
outputBase = 'Azpilcueta/sentences-lemmatized'
dest = None
# Then, sentence-tokenize our segments:
for i in range(numOfEds):
    dest = open(outputBase + '_' + str(year[i]) + '.txt',
                encoding='utf-8',
                mode='w')
    stp = set(stopwords[i])
    print("Cleaned/lemmatized ed. " + str(i) + " [" + language[i] + "]:")
    print("------")
    for s in range(len(Editions[i])):
        for a in tokenizer.tokenize(Editions[i][s]):
            dest.write(" ".join([x for x in ourLemmatiser(language[i])(a) if x not in stp]) + '\n')
            print(" ".join([x for x in ourLemmatiser(language[i])(a) if x not in stp]))
        dest.write(' \n')   # blank line between segments
        print(' ')
    dest.close()
Explanation: ... and lemmatize/stopwordize it ...
End of explanation
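For reference, the cleaned-up files written above are then handed to hunalign outside of this notebook. The call would look roughly like the following; the flags and the "null.dic" empty-dictionary convention are assumptions to be checked against the local hunalign installation, and the output file name is made up for the example:
# Build the hunalign call for the 1552/1556 pair (assumed flags, see above):
cmd = "hunalign -text -utf null.dic {src} {tgt} > {out}".format(
    src=outputBase + '_' + str(year[1]) + '.txt',
    tgt=outputBase + '_' + str(year[2]) + '.txt',
    out='Azpilcueta/aligned_1552_1556.txt')
print(cmd)   # to be run in a shell where hunalign is installed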
from sklearn.metrics.pairwise import cosine_similarity
# tfidf_matrix still holds the segments of the last edition vectorized above
similarities = pd.DataFrame(cosine_similarity(tfidf_matrix))
similarities[round(similarities, 0) == 1] = 0  # Suppress a document's similarity to itself
print("Pairwise similarities:")
print(similarities)
print("The two most similar segments in the corpus are")
most_similar_pair = similarities.stack().idxmax()   # (row, column) index of the largest value
print("segments", most_similar_pair[0], "and", most_similar_pair[1], ".")
print("They have a similarity score of")
print(similarities.values.max())
Explanation: With these preparations made, Hunaligning 1552 and 1556 reports "Quality 0.63417" for unlemmatized and "Quality 0.51392" for lemmatized versions of the texts for its findings, which still contain many errors. Removing ":" from the sentence end marks gives "Quality 0.517048/0.388377", but, from a first impression, with fewer errors. Results can be output in different formats; xls files are here and here.
## Similarity
It seems we could now create another matrix, replacing lemmata with concepts and retaining the tf/idf values (so as to keep a weight coefficient for the concepts). Then we should be able to calculate similarity measures across the same concepts...
The approach to choose would probably be the "cosine similarity" of concept vector spaces. Again, there is a library ready for us to use (but you can find some documentation here, here and here).
However, this is where I have to take a break now. I will return to here soon...
End of explanation
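As a rough sketch of the idea just described (not run against the full corpus here): represent a segment as a vector over Babelnet synset IDs, weight each synset by the tf/idf score of the word that produced it, and compare two such vectors with cosine similarity. The helper below reuses the top_possible_conceptIDs / top_possible_conceptIDs_2 lookups and the topTerms weights from above:
import math
from collections import defaultdict

def concept_vector(concepts_per_word, top_terms):
    weights = dict(top_terms)                   # word -> tf/idf score
    vec = defaultdict(float)
    for word, synset_ids in concepts_per_word.items():
        for sid in synset_ids:
            vec[sid] += weights.get(word, 0.0)  # weight each synset by its word's score
    return vec

def cosine(u, v):
    shared = set(u) & set(v)
    num = sum(u[k] * v[k] for k in shared)
    den = (math.sqrt(sum(x * x for x in u.values()))
           * math.sqrt(sum(x * x for x in v.values())))
    return num / den if den else 0.0

vec_a = concept_vector(top_possible_conceptIDs, topTerms[startEd][segment_no])
vec_b = concept_vector(top_possible_conceptIDs_2, topTerms[secondEd][segment_no])
print("Concept-space cosine similarity of the two versions of the segment:",
      round(cosine(vec_a, vec_b), 3))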
from wordcloud import WordCloud
import matplotlib.pyplot as plt
# We make tuples of (lemma, tf/idf score) for one of our segments.
# But we have to convert our tf/idf weights to pseudo-frequencies (i.e. integer numbers).
# (mx_array and fn still hold the tf/idf rows and feature names of the last edition
#  vectorized above; we take its segment no. 3 as the example here)
frq = [int(round(x * 100000, 0)) for x in mx_array[3]]
freq = dict(zip(fn, frq))
wc = WordCloud(background_color=None, mode="RGBA", max_font_size=40, relative_scaling=1).fit_words(freq)
# Now show/plot the wordcloud
plt.figure()
plt.imshow(wc, interpolation="bilinear")
plt.axis("off")
plt.show()
Explanation: (The text for this step did not survive extraction.)
End of explanation
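The next cell builds an HTML gallery with one wordcloud per segment, but its opening lines - creating the output folder, opening the HTML file, writing the page header, and defining the label and tokenised lists it uses - did not survive extraction. Purely as an assumption about what that setup might have looked like (everything here is a hypothetical stand-in, not the original code), something like this would make the cell below runnable:
import os
import re

outputDir = 'Azpilcueta/html'                 # hypothetical output folder
os.makedirs(outputDir, exist_ok=True)
htmlfile = open(outputDir + '/index.html', encoding='utf-8', mode='w')
htmlfile.write('<html><head><meta charset="utf-8"/></head><body><table>\n')
# label: one caption per segment; tokenised: tokenized segments for the word counts.
# Both definitions are guesses based only on how the names are used below.
label = [str(i) for i in range(len(mx_array))]
tokenised = [re.split(r'\W+', seg) for seg in Editions[3]]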
a = []       # will hold the pseudo-frequency lists, one per segment
dicts = []
w = []
# For each segment, create a wordcloud and write it along with label and
# other information into a new row of the html table
for i in range(len(mx_array)):
    # this is like above in the single-segment example...
    a.append([int(round(x * 100000, 0)) for x in mx_array[i]])
    dicts.append(dict(zip(fn, a[i])))
    w.append(WordCloud(background_color=None, mode="RGBA",
                       max_font_size=40, min_font_size=10,
                       max_words=60, relative_scaling=0.8).fit_words(dicts[i]))
    # We write the wordcloud image to a file
    w[i].to_file(outputDir + '/wc_' + str(i) + '.png')
    # Finally we write the table row. Its HTML markup was lost in extraction;
    # only the text content ("Section ...", the wordcloud image written above,
    # "length: ... words") is recoverable, so the markup below is a guessed
    # reconstruction.
    htmlfile.write(
        '<tr><td>Section {a}: {b}<br/>'
        '<img src="wc_{a}.png"/><br/>'
        'length: {c} words</td></tr>\n'.format(a = str(i), b = label[i], c = len(tokenised[i])))
# And then we write the end of the html file.
htmlfile.write(