OPUS-MT Dashboard
Language pair: rus-eng
Models: all models (OPUS-MT and external)
Benchmark: all benchmarks
Evaluation metric: bleu
[Chart omitted: blue = OPUS-MT / Tatoeba-MT models, grey = external models, purple = user-contributed]
Model Scores (comparing OPUS-MT and external models)

ID  Benchmark                 OPUS-MT model              bleu  External model            bleu  Diff
 0  flores101-devtest         zle-eng/opus..2022-03-17   35.2  facebook/wmt19-ru-en      37.3   -2.1
 1  flores200-devtest         zle-eng/opus..2022-03-17   35.2  facebook/wmt19-ru-en      37.3   -2.1
 2  newstest2012              zle-eng/opus..2022-03-03   39.2  facebook/wmt19-ru-en      40.0   -0.8
 3  newstest2013              zle-eng/opus..2022-03-17   31.3  facebook/wmt19-ru-en      38.3   -7.0
 4  newstest2014              zle-eng/opus..2022-03-17   40.5  facebook/wmt19-ru-en      42.2   -1.7
 5  newstest2015              zle-eng/opus..2022-03-17   36.1  facebook/wmt19-ru-en      45.0   -8.9
 6  newstest2016              zle-eng/opus..2022-03-03   35.8  facebook/wmt19-ru-en      40.0   -4.2
 7  newstest2017              zle-eng/opus..2022-03-17   40.8  facebook/wmt19-ru-en      51.9  -11.1
 8  newstest2018              zle-eng/opus..2022-03-17   35.2  facebook/wmt19-ru-en      38.6   -3.4
 9  newstest2019              zle-eng/opus..2022-03-17   41.6  facebook/wmt19-ru-en      39.0   +2.6
10  newstest2020              zle-eng/opus..2022-03-17   36.9  facebook/wmt19-ru-en      38.0   -1.1
11  newstestB2020             zle-eng/opus..2022-03-17   39.3  facebook/wmt19-ru-en      40.1   -0.8
12  tatoeba-test-v2020-07-28  ru-en/opus-2020-01-16      60.6  facebook/nllb-200-3.3B    61.6   -1.0
13  tatoeba-test-v2021-03-30  ru-en/opus-2020-02-26      60.1  facebook/nllb-200-3.3B    61.2   -1.1
14  tatoeba-test-v2021-08-07  ru-en/opus-2020-02-26      58.2  facebook/nll..illed-1.3B  59.6   -1.4
15  tico19-test               zle-eng/opus..2022-03-17   33.3  facebook/nllb-200-3.3B    36.4   -3.1
    average                                              41.2                            44.2   -2.9
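
The Diff column is the OPUS-MT score minus the external score for each benchmark, and the final row averages each score column independently (so the rounded average diff can differ slightly from the difference of the two rounded column averages). A minimal sketch of that arithmetic, using the score pairs from the table above:

```python
# (OPUS-MT bleu, external bleu) per benchmark, copied from the table above.
rows = [
    (35.2, 37.3), (35.2, 37.3), (39.2, 40.0), (31.3, 38.3),
    (40.5, 42.2), (36.1, 45.0), (35.8, 40.0), (40.8, 51.9),
    (35.2, 38.6), (41.6, 39.0), (36.9, 38.0), (39.3, 40.1),
    (60.6, 61.6), (60.1, 61.2), (58.2, 59.6), (33.3, 36.4),
]

# Per-benchmark difference: positive means OPUS-MT is ahead.
diffs = [opus - ext for opus, ext in rows]

# Column averages, as in the table's final row.
avg_opus = sum(o for o, _ in rows) / len(rows)
avg_ext = sum(e for _, e in rows) / len(rows)

print(f"OPUS-MT avg: {avg_opus:.1f}")   # matches the 41.2 in the table
print(f"external avg: {avg_ext:.1f}")   # matches the 44.2 in the table
print(f"avg diff: {avg_opus - avg_ext:.2f}")
```

The exact mean difference is -2.95, which the dashboard displays rounded to one decimal.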