Spark in me - Internet, data science, math, deep learning, philosophy

snakers4 @ telegram, 1818 members, 1744 posts since 2016

All this - lost like tears in rain.

Data science, ML, a bit of philosophy and math. No bs.

Our website
- http://spark-in.me
Our chat
- https://t.me/joinchat/Bv9tjkH9JHbxiV5hr91a0w
DS courses review
- http://goo.gl/5VGU5A
- https://goo.gl/YzVUKf

snakers4 (Alexander), March 17, 15:40

New large dataset for your GAN or pix2pix pet project

500k fashion images + meta-data + landmarks

github.com/switchablenorms/DeepFashion2

#deep_learning

switchablenorms/DeepFashion2

DeepFashion2 Dataset https://arxiv.org/pdf/1901.07973.pdf - switchablenorms/DeepFashion2


snakers4 (Alexander), March 17, 05:41

youtu.be/jBsC34PxzoM

Cramer's rule, explained geometrically | Essence of linear algebra, chapter 12
This rule seems random to many students, but it has a beautiful reason for being true. Home page: https://www.3blue1brown.com/ Brought to you by you: http://...

New video from 3B1B

Which is kind of relevant
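If you want to sanity-check the rule itself, here is a toy example of mine (not from the video): x_i = det(A with column i replaced by b) / det(A).

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])
det_A = np.linalg.det(A)
# Cramer's rule: x_i = det(A_i) / det(A), where A_i has column i replaced by b
x = np.array([np.linalg.det(np.column_stack([b if j == i else A[:, j] for j in range(2)])) / det_A
              for i in range(2)])
print(x)                      # [0.8 1.4]
print(np.linalg.solve(A, b))  # same answer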

snakers4 (Alexander), March 14, 03:58

youtu.be/iM4PPGDQry0

GANPaint: An Extraordinary Image Editor AI
📝 The paper " GAN Dissection: Visualizing and Understanding Generative Adversarial Networks " and its web demo is available here: https://gandissect.csail.mi...

snakers4 (Alexander), March 12, 15:45

Our Transformer post was featured by Towards Data Science

medium.com/p/complexity-generalization-computational-cost-in-nlp-modeling-of-morphologically-rich-languages-7fa2c0b45909?source=email-f29885e9bef3--writer.postDistributed&sk=a56711f1436d60283d4b672466ba258b

#nlp

Comparing complex NLP models for complex languages on a set of real tasks

Transformer is not yet really usable in practice for languages with rich morphology, but we take the first step in this direction


snakers4 (Alexander), March 12, 11:53

New tricks for training CNNs

Forwarded from Just links:

arxiv.org/abs/1812.01187

Bag of Tricks for Image Classification with Convolutional Neural Networks

Much of the recent progress made in image classification research can be credited to training procedure refinements, such as changes in data augmentations and optimization methods. In the...


Forwarded from Just links:

DropBlock: A regularization method for convolutional networks arxiv.org/abs/1810.12890
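The idea in a nutshell, sketched from the paper's description (my sketch, not the authors' code): instead of dropping single activations, zero out whole block_size x block_size regions of the feature map, then rescale like ordinary dropout.

import torch
import torch.nn.functional as F

def drop_block(x, drop_prob=0.1, block_size=5):
    # x: (N, C, H, W) feature map; block_size should be odd here to preserve the shape
    if drop_prob == 0.0:
        return x
    n, c, h, w = x.shape
    # gamma makes the expected fraction of dropped units roughly equal to drop_prob
    gamma = drop_prob / block_size ** 2 * (h * w) / ((h - block_size + 1) * (w - block_size + 1))
    centers = (torch.rand(n, c, h, w, device=x.device) < gamma).float()
    # grow each sampled center into a block_size x block_size block via max-pooling
    mask = 1.0 - F.max_pool2d(centers, kernel_size=block_size, stride=1, padding=block_size // 2)
    return x * mask * mask.numel() / mask.sum()  # rescale to keep the expected activation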

snakers4 (Alexander), March 08, 14:38

Forwarded from Just links:

callingbullshit.org/index.html

Calling Bullshit: Data Reasoning in a Digital World

The world is awash in bullshit. Politicians are unconstrained by facts. Science is conducted by press release. Higher education rewards bullshit over analytic thought. Startup culture elevates bullshit to high art. Advertisers wink conspiratorially and invite us to join them in seeing through all the bullshit — and take advantage of our lowered guard to bombard us with bullshit of the second order. The majority of administrative activity, whether in private business or the public sphere, seems to be little more than a sophisticated exercise in the combinatorial reassembly of bullshit.


snakers4 (Alexander), March 07, 15:42

Our experiments with Transformers, BERT and generative language pre-training

TLDR

For morphologically rich languages, pre-trained Transformers are not a silver bullet, and from a layman's perspective they are not feasible unless someone invests huge computational resources into sub-word tokenization methods that work well and into actually training these large networks.

On the other hand we have definitively shown that:

- Starting a Transformer with an EmbeddingBag initialized via FastText works and is relatively feasible;

- On complicated tasks such a Transformer significantly outperforms training from scratch (as well as naive models) and shows decent results compared to state-of-the-art specialized models;

- Generative pre-training worked, but it overfitted more than FastText initialization, and given the complexity required for such pre-training it is not useful;

spark-in.me/post/bert-pretrain-ru

All in all, this was a relatively large gamble that did not pay off - the Transformer did not excel at the more down-to-earth task we hoped it would.
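For reference, a minimal sketch of the FastText warm start mentioned above (toy vocabulary and model path are mine; the real setup bags sub-word n-grams per token, here it is one row per word for brevity):

import torch
import torch.nn as nn
from gensim.models import FastText

ft = FastText.load('ft_ru.model')     # hypothetical model path
vocab = ['мама', 'мыла', 'раму']      # toy vocabulary
# FastText composes vectors from char n-grams, so even OOV words get one
weights = torch.stack([torch.from_numpy(ft.wv[w].copy()) for w in vocab])
bag = nn.EmbeddingBag(len(vocab), ft.vector_size, mode='mean')
with torch.no_grad():
    bag.weight.copy_(weights)         # FastText vectors as the starting point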

#deep_learning

Complexity / generalization / computational cost in modern applied NLP for morphologically rich languages

Complexity / generalization / computational cost in modern applied NLP for morphologically rich languages. Towards a new state of the art? Author's articles - http://spark-in.me/author/snakers41 Blog - http://spark-in.me


An approach to ranking search results with no annotation

Just a small article with a novel idea:

- Instead of training a network with CE - just train it with BCE;

- Mine additional labels from the inner structure of your domain (tags, matrix decomposition methods, heuristics, etc.);

spark-in.me/post/classifier-result-sorting

Works best if your ontology is relatively simple.
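The CE → BCE swap in a nutshell (a toy sketch of mine, not the code from the article): treat the classifier as multi-label, so its per-class sigmoid scores can double as ranking scores.

import torch
import torch.nn as nn

logits = torch.randn(4, 10)                                 # batch of 4, 10 classes / tags
targets = torch.zeros(4, 10)
targets[torch.arange(4), torch.randint(0, 10, (4,))] = 1.0  # tags mined from your domain go here too
# loss = nn.CrossEntropyLoss()(logits, targets.argmax(1))   # the usual single-label CE
loss = nn.BCEWithLogitsLoss()(logits, targets)              # per-class BCE instead
scores = torch.sigmoid(logits)                              # independent per-class scores, usable for ranking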

#deep_learning

Learning to rank search results without annotation

Solving the search ranking problem. Author's articles - http://spark-in.me/author/adamnsandle Blog - http://spark-in.me


snakers4 (Alexander), March 07, 11:21

Inception v1 layers visualized on a map

A joint work by Google and OpenAI:

distill.pub/2019/activation-atlas/

distill.pub/2019/activation-atlas/app.html

blog.openai.com/introducing-activation-atlases/

ai.googleblog.com/2019/03/exploring-neural-networks.html

TLDR:

- Take 1M random images;

- Feed them to a CNN, collect some spatial activations;

- Produce a corresponding idealized image that would result in such an activation;

- Plot in 2D (via UMAP), add grid, averaging, etc etc;
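The first two steps of the TLDR above, sketched in PyTorch (my approximation, not the papers' code; the layer choice and the random inputs are stand-ins):

import torch
import torchvision
import umap  # pip install umap-learn

model = torchvision.models.googlenet(pretrained=True).eval()
acts = []

def hook(module, inputs, output):
    # sample one random spatial position per image, as in the paper's collection step
    b, c, h, w = output.shape
    ys, xs = torch.randint(0, h, (b,)), torch.randint(0, w, (b,))
    acts.append(output[torch.arange(b), :, ys, xs].detach())

model.inception4d.register_forward_hook(hook)   # a mid-network layer
with torch.no_grad():
    model(torch.randn(64, 3, 224, 224))         # stand-in for the 1M real images

coords = umap.UMAP(n_components=2).fit_transform(torch.cat(acts).numpy())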

#deep_learning

Activation Atlas

By using feature inversion to visualize millions of activations from an image classification network, we create an explorable activation atlas of features the network has learned and what concepts it typically represents.


snakers4 (Alexander), March 07, 09:58

Russian STT datasets

Does anyone know of more proper datasets?

I found this (60 hours), but I could not find the link to the dataset:

www.lrec-conf.org/proceedings/lrec2010/pdf/274_Paper.pdf

Anyway, here is the list I found:

- 20 hours of the Bible: github.com/festvox/datasets-CMU_Wilderness;

- www.kaggle.com/bryanpark/russian-single-speaker-speech-dataset - does not say how many hours

- Of course, audio book datasets - www.caito.de/data/Training/stt_tts/ - plus some scraping scripts github.com/ainy/shershe/tree/master/scripts

- And some disappointment here voice.mozilla.org/ru/languages

#deep_learning

Download 274_Paper.pdf 0.31 MB

snakers4 (Alexander), March 07, 06:47

PyTorch internals

speakerdeck.com/perone/pytorch-under-the-hood

#deep_learning

PyTorch under the hood

Presentation about PyTorch internals presented at the PyData Montreal in Feb 2019.


snakers4 (Alexander), March 06, 10:31

5th 2019 DS / ML digest

Highlights of the week

- New Adam version;

- POS tagging and semantic parsing in Russian;

- ML industrialization again;

spark-in.me/post/2019_ds_ml_digest_05

#digest

#data_science

#deep_learning

2019 DS/ML digest 05

2019 DS/ML digest 05. Author's articles - http://spark-in.me/author/snakers41 Blog - http://spark-in.me


snakers4 (Alexander), March 05, 09:23

Does anyone know anyone from TopCoder?

As usual with competition platforms, the organization sometimes has its issues

Forwarded from Anna:

Hi!

In case you did not know: besides the prizes for top places, the satellite challenge had one more cool feature - a student's prize for the _student_ with the highest score. It all turned out to be rather murky: there was no separate leaderboard for students. For a long time I tried to reach the admins, writing to their email and to the forum to learn more details. A month later an admin finally replied that I was the only candidate for the prize and that, supposedly, there were no problems, everything was being sorted out, just send your student ID. And then he disappeared again. I periodically reminded them of my existence and asked how things were going and whether there was any progress - and was ignored. *There is still no answer.* This is my first time taking part in a serious competition, and I do not quite understand what can be done in such a situation. Wait for news? Post on Twitter? Is there any way to get through to the admins?

Also, I wrote a small article here about my solution. spark-in.me/post/spacenet4

How I got to Top 10 in Spacenet 4 Challenge

Spacenet 4 Challenge: Building Footprints. Author's articles - http://spark-in.me/author/islanna Blog - http://spark-in.me


snakers4 (Alexander), March 04, 08:46

Tracking your hardware ... for data science

For a long time I thought that if you really want to track all your servers' metrics, you need Zabbix (which is very complicated).

A friend recommended an amazing tool to me

- prometheus.io/docs/guides/node-exporter/

It installs and runs literally in minutes.
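For a quick sanity check that it is up (a snippet of mine, not from the docs): node_exporter serves plain-text metrics on port 9100, so plain requests is enough.

import requests

metrics = requests.get('http://localhost:9100/metrics', timeout=5).text
for line in metrics.splitlines():
    if line.startswith('node_cpu_seconds_total'):  # metric name in recent node_exporter versions
        print(line)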

If you want it to auto-start properly, there are even slightly older Ubuntu packages and systemd examples

- github.com/prometheus/node_exporter/tree/master/examples/systemd

Dockerized metric exporters for GPUs by Nvidia

- github.com/NVIDIA/gpu-monitoring-tools/tree/master/exporters/prometheus-dcgm

Prometheus also has extensive alerting features, but they are hard to get started with, since there is no minimal example

- prometheus.io/docs/alerting/overview/

- github.com/prometheus/docs/issues/581

#linux

Monitoring Linux host metrics with the Node Exporter | Prometheus

An open-source monitoring system with a dimensional data model, flexible query language, efficient time series database and modern alerting approach.


snakers4 (Alexander), March 02, 04:49

youtu.be/eUzB0L0mSCI

Can You Recover Sound From Images?
Is it possible to reconstruct sound from high-speed video images? Part of this video was sponsored by LastPass: http://bit.ly/2SmRQkk Special thanks to Dr. A...

snakers4 (Alexander), February 28, 07:16

LSTM vs TCN vs Trellis network

- Did not try the Trellis network - decided it was too complex;

- All the TCN properties from the digest spark-in.me/post/2018_ds_ml_digest_31 hold - did not test for very long sequences;

- Looks like a really simple and reasonable alternative to RNNs for modeling and ensembling;

- On a sensible benchmark - performs mostly the same as an LSTM from a practical standpoint;

github.com/locuslab/TCN/blob/master/TCN/tcn.py
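Usage sketch (class name and signature as in the linked tcn.py; shapes are my toy values):

import torch
from tcn import TemporalConvNet  # the linked file

batch, features, seq_len = 8, 32, 128
x = torch.randn(batch, features, seq_len)  # note: (N, C, L), not (N, L, C) as with LSTMs
tcn = TemporalConvNet(num_inputs=features, num_channels=[64, 64, 64], kernel_size=3, dropout=0.2)
y = tcn(x)                                 # (N, 64, L); y[:, :, -1] plays the role of the last hidden state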

#deep_learning

2018 DS/ML digest 31

2018 DS/ML digest 31. Author's articles - http://spark-in.me/author/snakers41 Blog - http://spark-in.me


snakers4 (Alexander), February 27, 13:02

Dependency parsing and POS tagging in Russian

Less popular set of NLP tasks.

Popular tools reviewed

habr.com/ru/company/sberbank/blog/418701/

Only morphology:

(0) Well known pymorphy2 package;

Only POS tags and morphology:

(0) github.com/IlyaGusev/rnnmorph (easy to use);

(1) github.com/nlpub/pymystem3 (easy to use);

Full dependency parsing

(0) Russian spacy plugin:

- github.com/buriy/spacy-ru - installation

- github.com/buriy/spacy-ru/blob/master/examples/POS_and_syntax.ipynb - usage with examples

(1) Malt parser based solution (drawback - no examples)

- github.com/oxaoo/mp4ru

(2) Google's syntaxnet

- github.com/tensorflow/models/tree/master/research/syntaxnet
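A quick taste of the two "easy to use" morphology options above (APIs as documented by the libraries themselves):

import pymorphy2
from pymystem3 import Mystem

morph = pymorphy2.MorphAnalyzer()
print(morph.parse('стали')[0].normal_form)  # lemma of the most probable parse
m = Mystem()
print(m.analyze('мама мыла раму'))          # lemmas + grammar info per token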

#nlp

Exploring syntactic parsers for the Russian language

Hi! My name is Denis Kiryanov, I work at Sberbank and deal with natural language processing (NLP) problems. One day we needed to choose a syntactic...


snakers4 (Alexander), February 27, 12:39

We tried it

... yeah, we tried it on a real task

Just Adam is a bit better

snakers4 (Alexander), February 27, 07:50

New variation of Adam?

- [Website](www.luolc.com/publications/adabound/);

- [Code](github.com/Luolc/AdaBound);

- Eliminate the generalization gap between adaptive methods and SGD;

- TL;DR: A Faster And Better Optimizer with Highly Robust Performance;

- Dynamic bound on learning rates. Inspired by gradient clipping;

- Not very sensitive to the hyperparameters, especially compared with SGD(M);

- Tested on MNIST, CIFAR, Penn Treebank - no serious datasets;
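Usage is a drop-in replacement for Adam (as shown in the repo's README; the toy model is mine):

import torch.nn as nn
import adabound

model = nn.Linear(10, 2)  # toy model
optimizer = adabound.AdaBound(model.parameters(), lr=1e-3, final_lr=0.1)
# final_lr is the SGD-like rate that the dynamic bounds converge to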

#deep_learning

Adaptive Gradient Methods with Dynamic Bound of Learning Rate

Abstract: Adaptive optimization methods such as AdaGrad, RMSProp and Adam have been proposed to achieve a rapid training process with an element-wise scaling term on learning rates. Though prevailing, they are observed to generalize poorly compared with SGD or even fail to converge due to unstable and extreme learning rates. Recent work has put forward some algorithms such as AMSGrad to tackle this issue but they failed to achieve considerable improvement over existing methods.


snakers4 (Alexander), February 18, 09:24

4th 2019 DS / ML digest

Highlights of the week

- OpenAI controversy;

- BERT pre-training;

- Using transformer for conversational challenges;

spark-in.me/post/2019_ds_ml_digest_04

#digest

#data_science

#deep_learning

2019 DS/ML digest 04

2019 DS/ML digest 04. Author's articles - http://spark-in.me/author/snakers41 Blog - http://spark-in.me


snakers4 (Alexander), February 17, 10:22

A bit of lazy Sunday admin stuff

Monitoring your CPU temperature with email notifications

- Change CPU temp to any metric you like

- Rolling log

- Sends an email only once when the metric becomes critical (you can add another email for when the metric becomes non-critical again)

gist.github.com/snakers4/cf0ffd57c3ef7f4e2e25f6b3347dcdec
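The gist boils down to roughly this (a simplified sketch of mine, not the gist itself; the sensor path, address, and local MTA are assumptions):

import smtplib
import time
from email.message import EmailMessage

THRESHOLD_C = 85.0
alerted = False

def cpu_temp_c():
    # thermal_zone0 is a common location for the CPU sensor, but not universal
    with open('/sys/class/thermal/thermal_zone0/temp') as f:
        return int(f.read().strip()) / 1000.0

while True:
    temp = cpu_temp_c()
    if temp >= THRESHOLD_C and not alerted:
        msg = EmailMessage()
        msg['Subject'] = 'CPU temperature critical: %.1f C' % temp
        msg['From'] = msg['To'] = 'admin@example.com'  # hypothetical address
        with smtplib.SMTP('localhost') as s:           # assumes a local MTA
            s.send_message(msg)
        alerted = True                                 # send only once
    elif temp < THRESHOLD_C:
        alerted = False                                # re-arm after recovery
    time.sleep(60)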

Setting up a GPU box on Ubuntu 18.04 from scratch

github.com/snakers4/gpu-box-setup/

#deep_learning

#linux

Plain temperature monitoring in Ubuntu 18.04

Plain temperature monitoring in Ubuntu 18.04. GitHub Gist: instantly share code, notes, and snippets.


snakers4 (Alexander), February 17, 08:49

Pinned post

What is this channel about?

(0)

This channel is a practitioner's channel on the following topics: Internet, Data Science, Deep Learning, Python, NLP

(1)

Don't get bent out of shape if your opinion differs.

You are welcome to contact me via telegram @snakers41 and email - [email protected]

(2)

No BS and ads - I already rejected 3-4 crappy ad deals

(3)

DS/ML digests - in the RSS feed or via URLs like this

spark-in.me/post/2019_ds_ml_digest_01

Donations

(0)

Buy me a coffee 🤟 buymeacoff.ee/8oneCIN

Give us a rating:

(0)

telegram.me/tchannelsbot?start=snakers4

Our chat

(0)

t.me/joinchat/Bv9tjkH9JHYvOr92hi5LxQ

More links

(0)

Our website spark-in.me

(1)

Our chat t.me/joinchat/Bv9tjkH9JHYvOr92hi5LxQ

(2)

DS courses review (RU) - very old

goo.gl/5VGU5A

spark-in.me/post/learn-data-science

(3)

2017 - 2018 SpaceNet Challenge

spark-in.me/post/spacenet-three-challenge

(4)

DS Bowl 2018

spark-in.me/post/playing-with-dwt-and-ds-bowl-2018

(5)

Data Science tag on the website

spark-in.me/tag/data-science

(6)

Profi.ru project

towardsdatascience.com/building-client-routing-semantic-search-in-the-wild-14db04687c7e

(7)

CFT 2018 competition

spark-in.me/post/cft-spelling-2018

(8)

2018 retrospective

spark-in.me/post/2018

More amazing NLP-related articles incoming!

Maybe finally we will make podcasts?

2019 DS/ML digest 01

2019 DS/ML digest 01. Author's articles - http://spark-in.me/author/snakers41 Blog - http://spark-in.me


snakers4 (Alexander), February 14, 06:20

Which type of content do you / would you like most on the channel?

  • Weekly / bi-weekly digests; (34)
  • Full articles; (13)
  • Podcasts with actual ML practitioners; (12)
  • Practical bits on real applied NLP; (28)
  • Pre-trained BERT with Embedding Bags for Russian; (11)
  • Paper reviews; (21)
  • Jokes / memes / cats; (9)

128 votes

snakers4 (Alexander), February 13, 09:56

*

(2) is valid for models with a complex forward pass and for models with large embedding layers

snakers4 (Alexander), February 13, 09:02

PyTorch NLP best practices

Very simple ideas, actually.

(1) Multi GPU parallelization and FP16 training

Do not bother reinventing the wheel.

Just use nvidia's apex, DistributedDataParallel, DataParallel.

Best examples [here](github.com/huggingface/pytorch-pretrained-BERT).

(2) Put as much as possible INSIDE of the model

Implement as much of your logic as possible inside of nn.Module.

Why?

So that you can seamlessly use all the abstractions from (1).

Also models are more abstract and reusable in general.

(3) Why have a separate train/val loop?

PyTorch 0.4 introduced context managers.

You can simplify your train / val / test loops, and merge them into one simple function.

context = torch.no_grad() if loop_type == 'Val' else torch.enable_grad()

if loop_type == 'Train':
    model.train()
elif loop_type == 'Val':
    model.eval()

with context:
    # loader is whichever DataLoader matches loop_type
    for i, some_tensor in enumerate(tqdm(loader)):
        # do your stuff here
        pass

(4) EmbeddingBag

Use EmbeddingBag layer for morphologically rich languages. Seriously!
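In one glance (toy indices, my sketch): a token is a bag of its sub-word ids, and the layer averages their vectors without materializing any padded intermediate.

import torch
import torch.nn as nn

bag = nn.EmbeddingBag(num_embeddings=1000, embedding_dim=16, mode='mean')
ngram_ids = torch.tensor([3, 17, 255, 9, 42])  # sub-word ids of two tokens, flattened
offsets = torch.tensor([0, 3])                 # token boundaries: ids[0:3] and ids[3:]
token_vectors = bag(ngram_ids, offsets)        # shape (2, 16)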

(5) Writing trainers / training abstractions

This is a waste of time imho if you follow (1), (2) and (3).

(6) Nice bonus

If you follow most of these, you can train on as many GPUs and machines as you want for any language)

(7) Using tensorboard for logging

This goes without saying.

#nlp

#deep_learning

huggingface/pytorch-pretrained-BERT

📖The Big-&-Extending-Repository-of-Transformers: Pretrained PyTorch models for Google's BERT, OpenAI GPT & GPT-2, Google/CMU Transformer-XL. - huggingface/pytorch-pretrained-BERT


PyTorch DataLoader, GIL thrashing and CNNs

Well all of this seems a bit like magic to me, but hear me out.

I abused my GPU box for weeks running CNNs on 2-4 GPUs.

Nothing broke.

And then my GPU box started shutting down for no apparent reason.

No, this was not:

- CPU overheating (I have a massive cooler, I checked - it works);

- PSU;

- Overclocking;

- To add to the confusion, AMD has weird temperature readings;

To cut the story short - if you have a very fast Dataset class and you use PyTorch's DataLoader with num_workers > 0, it can lead to system instability instead of a speed-up.

It is obvious in retrospect, but it is not when you face this issue.
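I.e. if __getitem__ is nearly free, the worker processes mostly burn CPU on inter-process plumbing; keeping the loader in-process avoids this (toy sketch of mine):

import torch
from torch.utils.data import DataLoader, TensorDataset

ds = TensorDataset(torch.randn(10000, 32))  # a "very fast" Dataset: pure tensor slicing
loader = DataLoader(ds, batch_size=64, num_workers=0, pin_memory=True)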

#deep_learning

#pytorch

snakers4 (Alexander), February 12, 05:13

Russian thesaurus that really works

nlpub.ru/Russian_Distributional_Thesaurus#.D0.93.D1.80.D0.B0.D1.84_.D0.BF.D0.BE.D0.B4.D0.BE.D0.B1.D0.B8.D1.8F_.D1.81.D0.BB.D0.BE.D0.B2

It knows so many peculiar / old-fashioned and cheeky synonyms for obscene words!

#nlp

Russian Distributional Thesaurus

Russian Distributional Thesaurus (RDT for short) is a project to build an open distributional thesaurus of the Russian language. At the moment the resource contains several components: word vectors (word embeddings), a word similarity graph (the distributional thesaurus itself), a set of hypernyms, and an inventory of word senses. All resources were built automatically from a corpus of Russian-language books (12.9 billion word usages). Future versions of the resource are planned to also include word sense vectors for Russian, obtained from the same text corpus. The project is developed by researchers from UrFU, Lomonosov Moscow State University, and the University of Hamburg. In the past, researchers from South Ural State University, Darmstadt University of Technology, the University of Wolverhampton, and the University of Trento contributed to the project.


snakers4 (Alexander), February 11, 06:29

Forwarded from Sava Kalbachou:

towardsdatascience.com/these-are-the-easiest-data-augmentation-techniques-in-natural-language-processing-you-can-think-of-88e393fd610

These are the Easiest Data Augmentation Techniques in Natural Language Processing you can think of — and they work.

Data augmentation is commonly used in computer vision. In vision, you can almost certainly flip, rotate, or mirror an image without risk…
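One of the techniques from the article, random swap, is literally a few lines (my paraphrase of its description):

import random

def random_swap(words, n=2):
    # swap two random tokens, n times
    words = words[:]
    for _ in range(n):
        i, j = random.sample(range(len(words)), 2)
        words[i], words[j] = words[j], words[i]
    return words

print(random_swap('data augmentation is commonly used in computer vision'.split()))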


snakers4 (Alexander), February 11, 06:22

Old news ... but Attention works

Funnily enough, in the past my models:

- Either did not need attention;

- Or attention was implemented by @thinline72;

- Or the domain (NMT) was so complicated that I had to resort to boilerplate with key-value attention;

This was the first time I / we tried manually building a model with plain self-attention from scratch.

And you know - it really adds 5-10% to all of the tracked metrics.

Best plain attention layer in PyTorch - simple, well documented ... and it works in real life applications:

gist.github.com/cbaziotis/94e53bdd6e4852756e0395560ff38aa4
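The idea at a glance - a minimal additive self-attention layer in the same spirit (my sketch, not the gist itself): score each timestep, softmax, weighted sum.

import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    def __init__(self, hidden_dim):
        super().__init__()
        self.scorer = nn.Linear(hidden_dim, 1)

    def forward(self, x, mask=None):
        # x: (batch, seq_len, hidden_dim), mask: (batch, seq_len) bool
        scores = self.scorer(x).squeeze(-1)
        if mask is not None:
            scores = scores.masked_fill(~mask, float('-inf'))
        weights = torch.softmax(scores, dim=-1)
        return (x * weights.unsqueeze(-1)).sum(dim=1), weights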

#nlp

#deep_learning

SelfAttention implementation in PyTorch

SelfAttention implementation in PyTorch. GitHub Gist: instantly share code, notes, and snippets.


snakers4 (Alexander), February 08, 16:20

youtu.be/DMXvkbAtHNY

DeepMind’s AlphaStar Beats Humans 10-0 (or 1)
DeepMind's #AlphaStar blog post: https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii/ Full event: https://www.youtube.com/watc...

snakers4 (Alexander), February 08, 10:11

Third 2019 DS / ML digest

Highlights of the week

- quaternions;

- ODEs;

spark-in.me/post/2019_ds_ml_digest_03

#digest

#data_science

#deep_learning

2019 DS/ML digest 03

2019 DS/ML digest 03. Author's articles - http://spark-in.me/author/snakers41 Blog - http://spark-in.me

