November 22, 2018

Our victory in the CFT 2018 competition

TLDR

- Multi-task learning + seq2seq models rule;

- The domain seems easy at first glance, but it is not;

- You can also build a pipeline based on manual features, but it will not be task-agnostic;

- Loss weighting is crucial for such tasks;

- The Transformer takes 10x longer to train;
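A minimal sketch of what "loss weighting" for a multi-task setup can look like: a weighted sum of per-task losses, normalized by the total weight. The task names, weights, and helper function here are illustrative assumptions, not the actual pipeline from the post.

```python
# Hypothetical sketch of multi-task loss weighting; task names and
# weight values are illustrative, not taken from the winning pipeline.

def combined_loss(task_losses, weights):
    """Weighted sum of per-task losses, normalized by the total weight."""
    total_w = sum(weights[t] for t in task_losses)
    return sum(weights[t] * loss for t, loss in task_losses.items()) / total_w

# Example: the main seq2seq (spelling correction) loss keeps full weight,
# while an auxiliary classification loss is down-weighted so it does not
# swamp the main task.
losses = {"seq2seq": 2.0, "clf": 0.5}
weights = {"seq2seq": 1.0, "clf": 0.25}
print(combined_loss(losses, weights))  # (1.0*2.0 + 0.25*0.5) / 1.25 = 1.7
```

Tuning these weights matters because the per-task losses live on different scales; without weighting, one head can dominate the gradients.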

spark-in.me/post/cft-spelling-2018

#nlp

#deep_learning

#data_science

Winning a CFT 2018 spelling correction competition

Building a task-agnostic seq2seq pipeline on a challenging domain

Author's articles - http://spark-in.me/author/snakers41
Blog - http://spark-in.me