select language:
acm, afr, ajp, amh, ang, apc, ara, ara_Latn, arq, ary, arz, asm, ast, awa,
aze, aze_Latn, bak, bam, ban, bel, ben, ber, bho, bod, bos, bos_Latn, bre,
bug, bul, cat, cbk, ceb, ces, cha, chm, chv, ckb, cmn_Hans, cmn_Hant, cor,
crh, cym, dan, deu, dsb, dtp, dyu, dzo, egl, ell, eng, epo, est, eus, ewe,
fao, fas, fij, fil, fin, fkv, fra, frr, fry, fuc, ful, fur, fuv, gcf, gla,
gle, glg, gos, got, grc, grn, gsw, guj, hat, hau, hbs, heb, hin, hoc,
hoc_Latn, hrv, hrx, hsb, hun, hye, ibo, ido, ido_Latn, ile, ile_Latn, ilo,
ina, ina_Latn, ind, isl, ita, jav, jbo, jbo_Latn, jpn, jpn_Hani, jpn_Hira,
jpn_Kana, kab, kac, kan, kat, kaz, kaz_Cyrl, kea, kha, khm, kin, kir, kmr,
kor, kor_Hang, kur, kur_Latn, lad, lad_Latn, lao, lat, lat_Latn, lav,
lfn_Cyrl, lfn, lfn_Latn, lij, lim, lin, lit, liv, lmo, ltg, ltz, lug, luo,
lus, mai, mal, mar, mkd, mlg, mlt, mni, mon, mri, msa, mya, nds, nep, nld,
nno, nob, nor, nov, npi, nso, nst, nya, oci, orm, orv, ota_Arab, ota,
ota_Latn, pag, pam, pan, pap, pcd, pes, plt, pmn, pms, pol, por, prg, prs,
pus, rom, ron, run, rus, sag, san, sat, scn, shn, sin, slk, slv, smo, sna,
snd, som, spa, sqi, srd, srp, srp_Cyrl, srp_Latn, sun, swa, swe, swg, swh,
szl, tah, tam, tat, tel, tgk, tgl, tha, tir, tlh, tlh_Latn, toki, toki_Latn,
ton, tpi, tso, tuk, tuk_Latn, tur, tzl, tzl_Latn, uig_Arab, uig, ukr, umb,
urd, uzb, uzb_Latn, vec, vie, vol, war, wol, wuu, xal, xho, yid, yor, yue,
yue_Hans, yue_Hant, zho, zho_Hans, zho_Hant, zsm, zsm_Latn, zul, zza

afr, ajp, apc, ara, ary, arz, asm, ast, awa, aze_Latn, bak, bel, ben, ber,
bre, bul, cat, ceb, ces, cmn_Hans, cmn_Hant, cor, crh, cym, dan, deu, ell,
eng, epo, est, eus, fao, fij, fil, fin, fra, fur, gcf, gla, gle, glg, guj,
hat, hau, hbs, heb, hin, hrv, hun, hye, ibo, ido, ile, ilo, ind, isl, ita,
jbo, jpn, jpn_Hani, jpn_Hira, kab, kat, kaz, lat, lav, lfn_Latn, lij, lim,
lin, lit, lmo, ltg, ltz, lug, mar, mkd, mlt, mri, msa, nds, nld, nno, nob,
nor, npi, nso, nya, oci, pag, pap, pcd, pol, por, pus, ron, run, rus, scn,
slk, slv, smo, sna, spa, srd, srp_Cyrl, srp_Latn, swe, tah, tat, tgl, tlh,
tlh_Latn, toki, toki_Latn, ton, tpi, tuk, tur, uig, uig_Arab, ukr, urd,
vec, vie, war, wuu, xho, yid, yor, yue, zsm
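The selector entries are three-letter language codes (largely ISO 639-3), optionally suffixed with an ISO 15924 script tag (e.g. cmn_Hans, srp_Latn). A minimal sketch for splitting such a tag into its two parts; the helper name parse_lang_tag is hypothetical, not part of the dashboard:

```python
def parse_lang_tag(tag):
    """Split a dashboard language tag into (language code, script or None).

    "cmn_Hans" -> ("cmn", "Hans"); a bare code like "eng" -> ("eng", None).
    """
    code, _, script = tag.partition("_")
    return code, script or None
```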
[swap] [compare scores] [compare models] [map] [release history] [uploads]
OPUS-MT Dashboard
Language pair: [fra-eng] (all languages)
Models: [all models] [OPUS-MT] [external] [compare]
Selected: Tatoeba-MT-models/eng-gla/opus+bt-2021-03-09
Benchmark: all benchmarks [download]
Evaluation metric: bleu [spbleu] [chrf] [chrf++] [comet]
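The dashboard reports scores in several automatic MT metrics (BLEU, spBLEU, chrF, chrF++, COMET). As a rough illustration of what a BLEU score measures, here is a simplified single-reference, sentence-level sketch; real evaluations use a standard tool such as sacrebleu, so the function below is illustrative only:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Simplified BLEU on whitespace tokens: clipped n-gram precision
    (n = 1..max_n), geometric mean, brevity penalty. Single reference,
    crude smoothing for zero counts; scaled to 0..100."""
    cand, ref = candidate.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        c_ngrams, r_ngrams = ngrams(cand, n), ngrams(ref, n)
        # Clip each candidate n-gram count by its count in the reference.
        overlap = sum(min(c, r_ngrams[g]) for g, c in c_ngrams.items())
        total = max(sum(c_ngrams.values()), 1)
        log_prec += math.log(max(overlap, 1e-9) / total)
    # Brevity penalty: punish candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return 100 * bp * math.exp(log_prec / max_n)
```

A perfect match scores 100; the eng-gla scores in the table below (8-12 BLEU) indicate translations with only modest n-gram overlap with the references.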
Chart type: [barchart] [heatmap]
orange = OPUS-MT, blue = Tatoeba-MT models, red = HPLT-MT models
green = student models, grey = external models, purple = user-contributed
render chart with [gd] [plotly]
Model Scores (selected model)

ID | Language | Benchmark                | Output | bleu
---|----------|--------------------------|--------|------
 0 | eng-gla  | flores200-devtest        | show   |  8.2
 1 | eng-gla  | tatoeba-test-v2020-07-28 | show   | 11.9
 2 | eng-gla  | tatoeba-test-v2021-03-30 | show   | 12.0
 3 | eng-gla  | tatoeba-test-v2021-08-07 | show   | 12.0

average: 11.025
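The average row is the unweighted mean of the four per-benchmark scores. A quick sanity check, with the benchmark names and values taken from the table above (the dict literal is only for illustration):

```python
# Per-benchmark BLEU scores for Tatoeba-MT-models/eng-gla/opus+bt-2021-03-09.
scores = {
    "flores200-devtest": 8.2,
    "tatoeba-test-v2020-07-28": 11.9,
    "tatoeba-test-v2021-03-30": 12.0,
    "tatoeba-test-v2021-08-07": 12.0,
}

# Unweighted mean over all benchmarks.
average = sum(scores.values()) / len(scores)
print(round(average, 3))  # 11.025, matching the dashboard's reported average
```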