This repo contains training data and models for layout analysis and text recognition for 17th c. French prints.
This repo is an updated version of the OCR17 repo: it uses XML files rather than `.png`/`.txt` pairs. The old repo is still available here.
Training data is organised per print:
- `Balzac1624_Lettres_btv1b86262420_corrected`
- `Boyer1697_Meduse_cb30152139c_corrected`
- …
To train a model, all the data first needs to be gathered in a single place, prior to the split between train, validation and test sets. To do so:

```bash
git clone https://github.com/e-ditiones/OCR17plus
cd datasetsOCRSegmenter17
bash build_train_alto_Seg17.sh
```

creates a `trainingDataSeg17` directory.

```bash
python train_val_prep.py ./trainingDataSeg17/*.xml
```

creates two new files: `train.txt` (with the training data) and `val.txt` (with the validation data).

If you have kraken installed, you can use

```bash
ketos segtrain -t train.txt -e val.txt -o model -d cuda -f alto -q early -bl
```

to train a model for layout analysis (`-t`/`-e` point to the manifests, `-f alto` reads the ALTO files they list, `-d cuda` trains on GPU and `-q early` enables early stopping).
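Once training stops, ketos keeps the best checkpoint under the `-o` prefix (typically `model_best.mlmodel`). As a minimal sketch, assuming a standard kraken install and a hypothetical page scan `page.png`, the resulting model can then be applied to a new image:

```bash
# Hypothetical example: segment one page with the freshly trained model.
# -a serialises the result as ALTO; 'segment -bl -i' runs baseline
# segmentation with the given model file.
kraken -a -i page.png page.xml segment -bl -i model_best.mlmodel
```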
The `test.txt` file is already prepared, to guarantee the reproducibility of the tests and to evaluate improvements over time. It was created with 3 title pages, 14 pages containing damage, 2 pages with margins, 14 with decorations, 19 with rubrics or signatures (or both), 1 with a running title at the bottom of the page, 3 pages with decorated drop capitals, 7 with basic drop capitals and 28 basic pages. This test file can also be used for an HTR training test, as sketched below.
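For the HTR side, a possible invocation (assuming a kraken version whose `ketos test` accepts an evaluation manifest via `-e`, and taking one of the HTR models shipped in this repo) would be:

```bash
# Hypothetical example: report the character accuracy of an HTR model
# on the fixed test set; -f alto reads the ALTO ground truth listed
# in the manifest.
ketos test -m Models/HTR/dentduchat.mlmodel -e test.txt -f alto
```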
The structure of the repo is the following:
```
├── Data
│   ├── Print_1
│   │   ├── alto4eScriptorium
│   │   ├── pageXmlTranskribus
│   │   ├── pagexmlTranskribusCorrected
│   │   └── png
│   ├── Print_2
│   │   ├── alto4eScriptorium
│   │   ├── pageXmlTranskribus
│   │   ├── pagexmlTranskribusCorrected
│   │   └── png
│   └── …
├── Models
│   ├── HTR
│   │   ├── bleu.mlmodel
│   │   ├── cheddar.mlmodel
│   │   ├── dentduchat.mlmodel
│   │   └── README.md
│   └── Segment
│       ├── appenzeller.mlmodel
│       └── README.md
├── build_train_alto_Seg17.sh
├── files_informations.csv
├── parts_dataset.csv
├── train_val_prep.py
├── test.txt
├── segmontoAltoValidator.xsd
├── validator_alto.py
└── README.md
```
The `Data` directory contains excerpts of 17th-century books, i.e. scans of selected pages and their encoding as
- PageXML
- ALTO-4 files.

Regarding the difference between all these directories, cf. infra, § Data description. Prints have been selected from the OCR17 repo, and are all described individually in their respective folder.
The `Models` directory contains models for:
- HTR
- Layout analysis, which is based on the SegmOnto vocabulary. A sketch of how both kinds of models can be chained follows this list.
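As an illustration (assuming a standard kraken install; `page.png` is a hypothetical input, and chaining `segment` and `ocr` is standard kraken usage), both models can be run in one pass:

```bash
# Hypothetical example: segment a page with the SegmOnto layout model,
# then recognise its text with one of the HTR models, writing ALTO.
kraken -a -i page.png page.xml \
    segment -bl -i Models/Segment/appenzeller.mlmodel \
    ocr -m Models/HTR/dentduchat.mlmodel
```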
`files_informations.csv` indicates in which files specific zones can be found. `parts_dataset.csv` gives the percentage of each of these features in the dataset.
Validation of the XML data pushed to the repository is done with `segmontoAltoValidator.xsd` and `validator_alto.py`. They come from the HTR-United/cremma-medieval repository.
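The exact command line of `validator_alto.py` is not documented here; as an alternative sketch, an exported ALTO file (hypothetical path below) can be checked directly against the schema with `xmllint`, which ships with libxml2:

```bash
# Validate one ALTO export against the SegmOnto ALTO schema.
xmllint --noout --schema segmontoAltoValidator.xsd \
    Data/Print_1/alto4eScriptorium/example.xml
```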
Some of the data used comes from the OCR17 repo, the composition of which started with Transkribus; it therefore needs to be adapted for eScriptorium. For each print exported from Transkribus, we thus propose:
- the file as exported (`pageXmlTranskribus`);
- the exported file prepared for eScriptorium (`pagexmlTranskribusCorrected`);
- the version exported from eScriptorium (`alto4eScriptorium`).
Data prepared and models trained by Claire Jahan with the help of Simon Gabay, as part of the E-ditiones project.
Claire Jahan: claire.jahan[at]chartes.psl.eu
Simon Gabay: Simon.Gabay[at]unige.ch
Claire Jahan and Simon Gabay, OCR17+ - Layout analysis and text recognition for 17th c. French prints, 2021, Paris/Genève: ENS Paris/UniGE, https://github.com/e-ditiones/OCR17plus.
Data is CC-BY, except for the images, which come from Gallica (cf. its terms of use).