Sciveyor Scripts

This repository contains a set of scripts used for maintaining the documents in the Sciveyor article database. They may or may not work for you, be useful, or explode.

Some of the scripts are documented in this README file, and others are not. We're aiming to improve the status of this documentation, but make no guarantees.

Contents

check/folder_extensions

In our work, we often produce multiple versions of the same file -- for instance, a.pdf might be transformed into a.txt and a.json, and then supplemented by a.pubmed.xml and a.crossref.json. This script walks through the current directory and all of its sub-directories recursively, looking at every basename and checking that a file with each of the provided extensions exists for it. If any extension is missing, all available versions of that basename are moved into a sub-folder of their current location called orphans.

Usage

check/folder_extensions [list,of,extensions]

The only parameter is a comma-separated list of file extensions to check. (These may be provided with or without a dot.) Files will be left alone if all of the given extensions are found; otherwise they will be moved.

Example:

In a folder containing:

  • a.pdf
  • a.xml
  • a.txt
  • b.pdf
  • c.xml

running:

check/folder_extensions xml,pdf,txt

will produce:

  • a.pdf
  • a.xml
  • a.txt
  • orphans
    • b.pdf
    • c.xml
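
For illustration, the core of that check can be sketched in a few lines of Ruby. This is not the script itself: the extension list is hard-coded here, and only single-dotted extensions are handled.

```ruby
require "fileutils"
require "set"

# The required extensions are hard-coded for illustration; the real script
# takes them from its command-line argument.
required = [".xml", ".pdf", ".txt"]

files = Dir.glob("**/*").select { |f| File.file?(f) }

# Group files by directory plus basename-without-extension.
files.group_by { |f| File.join(File.dirname(f), File.basename(f, ".*")) }
     .each do |base, versions|
  present = versions.map { |f| File.extname(f) }.to_set
  next if required.all? { |ext| present.include?(ext) }

  # Something is missing: move every version of this basename into orphans/
  orphans = File.join(File.dirname(base), "orphans")
  FileUtils.mkdir_p(orphans)
  versions.each { |f| FileUtils.mv(f, orphans) }
end
```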

move/change_extension

This script changes one file extension to another for all files with that extension found under the provided paths.

Usage

move/change_extension <.oldex> <.newex> <paths>

This will change all files with the extension .oldex under paths to .newex. If a destination file already exists, an error is printed and that move is skipped; the script will still attempt to move any other files it can.

Example:

In a folder containing:

  • a.json
  • a.pdf
  • b.json
  • c
    • d.json

running:

move/change_extension .json .backup .

will produce:

  • a.backup
  • a.pdf
  • b.backup
  • c
    • d.backup
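
A rough Ruby sketch of that behavior, with the extensions and paths hard-coded for illustration (the actual script reads them from its arguments):

```ruby
require "fileutils"

old_ext, new_ext = ".json", ".backup"   # hypothetical hard-coded values
paths = ["."]

paths.each do |path|
  Dir.glob(File.join(path, "**", "*#{old_ext}")).each do |file|
    dest = file.sub(/#{Regexp.escape(old_ext)}\z/, new_ext)

    if File.exist?(dest)
      # Report the collision and keep going with the remaining files.
      warn "Skipping #{file}: #{dest} already exists"
      next
    end

    FileUtils.mv(file, dest)
  end
end
```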

move/clean_filenames

Running this script will remove all characters from filenames in the current directory other than a-z, A-Z, 0-9, dash, and underscore, leaving exactly one dotted file extension at the end of the filename.

Note that because the script assumes exactly one file extension, files that are supposed to have no extension at all, as well as files with double-dotted extensions (like file.pubmed.xml, where .pubmed.xml is meant to be "the file extension"), will produce unexpected behavior.
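
The scrubbing amounts to something like the following Ruby fragment. This is an idealized sketch: collisions between cleaned names, and cleaning of the extension itself, are not handled here.

```ruby
require "fileutils"

Dir.glob("*").select { |f| File.file?(f) }.each do |file|
  ext  = File.extname(file)          # the single dotted extension that is kept
  stem = File.basename(file, ext)

  # Strip everything outside a-z, A-Z, 0-9, dash, and underscore from the stem.
  clean = stem.gsub(/[^A-Za-z0-9_-]/, "") + ext
  FileUtils.mv(file, clean) unless clean == file
end
```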

move/in_hashed_directories

Directories that are too large tend to upset operating systems. Beyond about 10,000 files (at least in our testing), network shares and even basic local commands like ls stop being very responsive. This script is designed to fix that in a way that still allows one to quickly determine whether a file is present on disk. It takes a number of files with names like journal-article-10_2307_1689205.xml, journal-article-10_2307_382953.xml, etc., and files them in folders corresponding to parts of the filename. For instance, the example files above could be placed in:

  • journal-article-10_2307_1689205.xml → 1/6/8/journal-article-10_2307_1689205.xml
  • journal-article-10_2307_382953.xml → 3/8/journal-article-10_2307_382953.xml

where the folder names have been extracted from the first "variable" parts of the filenames (1689205 and 382953, respectively).

The script then works through the given directories and moves files into the output directory. Whenever a directory in the output tree exceeds a given threshold size, it is split along the first non-ignored character. The process repeats, further subdividing folders as needed, until all files have been moved.

With files stored in this way, one can write a quick algorithm for determining whether or not a file is present on disk: starting from the variable characters in the filename, walk down the matching folders on disk until no deeper folder exists, and then check that directory for the presence or absence of the file you're searching for.
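
That lookup is short enough to sketch directly in Ruby. This is a hypothetical helper, not one of the scripts; it follows the --ignore-chars convention described under Usage below.

```ruby
# Given the root of the hashed tree, the number of constant prefix characters,
# and a filename, walk one folder deeper per "variable" character until the
# on-disk tree runs out, then report where the file would live.
def hashed_path(root, ignore_chars, filename)
  dir = root
  filename[ignore_chars..].each_char do |c|
    candidate = File.join(dir, c)
    break unless File.directory?(candidate)
    dir = candidate
  end
  File.join(dir, filename)
end

# For example:
# File.exist?(hashed_path(File.expand_path("~/FilesHashed"), 24,
#                         "journal-article-10_2307_1689205.xml"))
```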

Usage

in_hashed_directories [--max-files NUM] [--ignore-chars NUM] [--output DIR] [--main-extension EXT] [directories to search]

  • -mNUM, --max-files NUM: Control the maximum number of files that the script will allow within a given folder before splitting it. It defaults to 10,000.
  • -iNUM, --ignore-chars NUM: Ignore a given number of characters as "constant" at the beginning of every filename, before looking for "splitting" characters. It defaults to zero, skipping no characters.
  • -xEXT, --main-extension EXT: The script will look for all files which share the same basename and move them all at once (that is, a.xml, a.pdf, and a.txt will always wind up in the same folder). This parameter tells the script which extension should be the "primary" one to search. It defaults to .xml.
  • -oDIR, --output DIR: The output directory which will be the root of the hashed directory tree. Defaults to the current directory (.).
  • Finally, pass a list of directories to search; files found there will be moved into subdirectories of the output directory.

Example:

in_hashed_directories --max-files 5000 --ignore-chars 24 --main-extension .xml --output ~/FilesHashed ~/Files

This moves all files in ~/Files into hashed subdirectories of ~/FilesHashed, letting no directory grow larger than 5,000 files and ignoring the first 24 characters of every filename (in this example, journal-article-10_2307_).

ocr/ocr

This is our master script for converting PDF files to plain text. There are a number of decisions baked into this script, so it's worth spending some time detailing why we've done what we have.

First: this script does not, under any circumstances, extract native digital text from PDFs. This may seem like a surprising choice. Why not use that embedded text if it's available to us? Unfortunately, it suffers from two general problems. First, it strongly tends to arrive in the wrong order. As you know if you've tried to copy and paste from a PDF, text blocks are often stitched together in nonsensical ways. Rasterizing and passing the result through OCR does a better job of detecting page layout. Second, font problems are rampant. Technically, no glyph displayed in a PDF has to have any connection to any Unicode character whatsoever; we rely on the accuracy of the conversion tables in each PDF to do the job. For many publishers, and especially in older PDF files, those tables just don't work. It's more reliable, in general, to rasterize to images and then OCR.

Second: we rasterize all PDFs at 600 DPI (higher than usual to offer some cushion for PDF files that have broken physical size information), and in greyscale. This seems reasonably optimal for Tesseract 5.0, our OCR system.
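
To make the pipeline concrete, here is a rough Ruby sketch of the rasterize-then-OCR step. The pdftoppm and tesseract command lines are assumptions for illustration, not necessarily the exact tools or flags that ocr/ocr uses; the filenames are hypothetical.

```ruby
require "tmpdir"

pdf, txt = "pdf.pdf", "out.txt"    # hypothetical input and output names

Dir.mktmpdir do |dir|
  # Rasterize every page to greyscale PNGs at 600 DPI.
  system("pdftoppm", "-r", "600", "-gray", "-png", pdf, File.join(dir, "page"),
         exception: true)

  # OCR each page image with Tesseract and concatenate the results.
  text = Dir.glob(File.join(dir, "page-*.png")).sort.map do |page|
    base = page.sub(/\.png\z/, "")
    system("tesseract", page, base, exception: true)   # writes "#{base}.txt"
    File.read("#{base}.txt")
  end

  File.write(txt, text.join("\n"))
end
```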

Third: we've chosen Tesseract 5 (currently alpha-20210401) after some testing on published materials of various ages. It provides output at least as good as that of ABBYY 10, the OCR system we were initially using for the evoText project, and it is an open-source solution, which makes long-term maintenance easier. The new LSTM neural-network OCR engine in Tesseract 4+, combined with the tessdata_best models trained by Google, has accuracy on par with any extant OCR system that we are aware of.

Finally: after Tesseract finishes, we run cleanup scripts on every OCR text file it generates. Currently there is only one such script:

  • ocr/fix_hyphenation: Tesseract often fails to merge hyphenated words that are split across line endings. To solve this problem, we scan each line of the text file, looking for lines that end with either an ASCII dash or a Unicode hyphen character. If we find one, we check the partial word at the end of that line, the partial word at the start of the next line, and the merged word created by concatenating them against a spelling dictionary. If the merged word is a correct spelling, we accept it. If it is not, but both of the words on either side of the hyphen are, we create a hyphenated word (e.g., a line ending with 'drug-' and a line beginning with 'free' will produce 'drug-free'). If neither word is found in the dictionary, the merged word is used, on the assumption that it is a technical term (see the sketch below). Note that this script requires aspell and its English-language dictionary data to be installed.
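
A simplified Ruby sketch of that merging rule, assuming aspell (with its default English dictionary) is available on the PATH; the real script also has to deal with Unicode hyphens, punctuation, and reassembling the lines.

```ruby
# `aspell list` echoes back only the words it considers misspelled, so a word
# is "correct" when that output is empty.
def correct?(word)
  IO.popen(["aspell", "list"], "r+") do |io|
    io.puts word
    io.close_write
    io.read.strip.empty?
  end
end

# Join the fragment before a line-ending hyphen with the fragment that starts
# the next line, following the rules described above.
def join_hyphenated(first, second)
  merged = first + second
  return merged if correct?(merged)                   # "exam" + "ple"  -> "example"
  return "#{first}-#{second}" if correct?(first) &&
                                 correct?(second)     # "drug" + "free" -> "drug-free"
  merged                                              # unknown term: assume it merges
end
```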

Usage

ocr pdf.pdf out.txt

This script expects to be passed two filenames: first the PDF to convert, and second the text file to be created.

ocr/ocr_multiple

This script OCRs every PDF file named on the command line (recursing into any directories given) into text files in the same location (with the .pdf extension changed to .txt). Files will be skipped if the output is already present.

Usage

ocr_multiple folder_1 folder_2 3.pdf
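
The traversal itself is simple; here is a Ruby sketch of it. The call to ocr/ocr at the end is illustrative and assumes the command is run from the repository root.

```ruby
# Collect PDFs from the command line: directories are searched recursively,
# bare filenames are taken as-is.
pdfs = ARGV.flat_map do |arg|
  File.directory?(arg) ? Dir.glob(File.join(arg, "**", "*.pdf")) : [arg]
end

pdfs.each do |pdf|
  txt = pdf.sub(/\.pdf\z/i, ".txt")
  next if File.exist?(txt)          # output already present: skip this file

  system("ocr/ocr", pdf, txt)       # hand off to the single-file script above
end
```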

License

All scripts here, unless otherwise specified, are released under the Creative Commons CC0 license, placing them, as far as possible, in the public domain in every jurisdiction. Some scripts carry other licensing information, which will be indicated at the top of the file.