8 languages, 7 bitexts
total number of files: 20
total number of tokens: 1.48G
total number of sentence fragments: 47.54M
Please acknowledge the GoURMET project at https://gourmet-project.eu. This version is derived from the original release on the project website, adjusted for redistribution via the OPUS corpus collection. Please also acknowledge OPUS for this service.
Below you can download data files for all language pairs in different formats and with different kinds of annotation (where available). You can click on the various links as explained below. In addition to the files shown on this page, OPUS also provides pre-compiled word alignments, phrase tables, bilingual dictionaries, and frequency counts; these can be found through the resources search form on the OPUS top-level website.
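The plain-text (Moses) downloads consist of two line-aligned files per language pair, one sentence per line. As a minimal sketch of how such a pair can be consumed (the function name and file paths are illustrative, not part of the OPUS distribution):

```python
from itertools import islice

def read_moses_bitext(src_path, tgt_path, limit=None):
    """Yield (source, target) sentence pairs from two line-aligned files.

    Moses-format releases ship one file per language; line N of the
    source file is aligned with line N of the target file.
    """
    with open(src_path, encoding="utf-8") as src, \
         open(tgt_path, encoding="utf-8") as tgt:
        pairs = zip(src, tgt)
        if limit is not None:
            # Only read the first `limit` pairs of a large corpus.
            pairs = islice(pairs, limit)
        for s, t in pairs:
            yield s.rstrip("\n"), t.rstrip("\n")
```

Because the two files are aligned purely by line number, any tool that reads them must keep them in lockstep; skipping or filtering lines in one file without the other breaks the alignment.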
License: Creative Commons CC0
Copyright: All content is made publicly available under a Creative Commons CC0 license.
In the download tables:
- Bottom-left triangle: download files; upper-right triangle: sample files
- Upper-right triangle: translation memory files (TMX)
- Bottom-left triangle: plain text files (MOSES/GIZA++)
- Language IDs, first row: monolingual plain text files (tokenized)
- Language IDs, first column: monolingual plain text files (untokenized)
Note that TMX files contain only unique translation units; the number of aligned units is therefore smaller than in the Moses and XML distributions. Moses downloads include all non-empty alignment units, including duplicates. Token counts for each language likewise include duplicated sentences and documents.
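TMX is a simple XML format in which each aligned pair is stored once as a `<tu>` (translation unit) element containing one `<tuv>` per language. A minimal sketch of extracting units with the Python standard library (the function name is illustrative; real OPUS TMX files may carry additional header metadata):

```python
import xml.etree.ElementTree as ET

# ElementTree exposes the xml:lang attribute under the XML namespace.
XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

def read_tmx(path):
    """Yield one {language: segment} dict per <tu> element in a TMX file."""
    for _, elem in ET.iterparse(path):
        if elem.tag == "tu":
            unit = {}
            for tuv in elem.findall("tuv"):
                lang = tuv.get(XML_LANG) or tuv.get("lang")
                seg = tuv.find("seg")
                if lang and seg is not None:
                    # itertext() handles segments with inline markup.
                    unit[lang] = "".join(seg.itertext())
            yield unit
            elem.clear()  # release parsed elements to keep memory flat
```

Using `iterparse` rather than loading the whole tree keeps memory use roughly constant, which matters for the larger corpora on this page.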