We tried it
... yeah, we tried it on a real task
plain Adam is a bit better
New variation of Adam?
Eliminate the generalization gap between adaptive methods and SGD;
TL;DR: A Faster And Better Optimizer with Highly Robust Performance;
- Dynamic bound on learning rates. Inspired by gradient clipping;
- Not very sensitive to hyperparameters, especially compared with SGD(M);
- Tested on MNIST, CIFAR, Penn Treebank - no serious datasets;
Abstract: Adaptive optimization methods such as AdaGrad, RMSProp and Adam have been proposed to achieve a rapid training process with an element-wise scaling term on learning rates. Though prevailing, they are observed to generalize poorly compared with SGD or even fail to converge due to unstable and extreme learning rates. Recent work has put forward some algorithms such as AMSGrad to tackle this issue but they failed to achieve considerable improvement over existing methods.
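The core trick, as I understand it - a rough sketch of the dynamic bound (my reading of the paper, not the authors' code; names are mine):

import torch

# Both bounds converge to final_lr as step grows, so the optimizer
# starts out Adam-like and gradually becomes SGD-like.
def clipped_step_size(lr, exp_avg_sq, step,
                      final_lr=0.1, gamma=1e-3, eps=1e-8):
    lower = final_lr * (1 - 1 / (gamma * step + 1))
    upper = final_lr * (1 + 1 / (gamma * step))
    # element-wise Adam step size, clamped into [lower, upper]
    return (lr / (exp_avg_sq.sqrt() + eps)).clamp_(lower, upper)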
4th 2019 DS / ML digest
Highlights of the week
- OpenAI controversy;
- BERT pre-training;
- Using transformer for conversational challenges;
A bit of lazy Sunday admin stuff
Monitoring your CPU temperature with email notifications
- Change CPU temp to any metric you like
- Rolling log
- Sending an email only once, when the metric becomes critical (you can add an email for when the metric becomes non-critical again) - see the sketch below
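A minimal sketch of the idea (the sensor key, addresses and the local MTA are assumptions - adapt to your box):

import smtplib
import time

import psutil  # sensors_temperatures() wraps lm-sensors on Linux

THRESHOLD = 85.0         # degrees C - pick your own critical value
already_alerted = False  # so the email is sent only once per incident

def cpu_temp():
    temps = psutil.sensors_temperatures()
    # 'coretemp' is an assumption - the sensor key differs per machine
    return max(t.current for t in temps.get('coretemp', []))

def send_alert(temp):
    with smtplib.SMTP('localhost') as server:  # assumes a local MTA
        server.sendmail('box@example.com', 'me@example.com',
                        'Subject: CPU temp alert\n\nCPU at %.1f C' % temp)

while True:
    temp = cpu_temp()
    print(time.strftime('%F %T'), temp)  # swap for a rotating log handler
    if temp > THRESHOLD and not already_alerted:
        send_alert(temp)
        already_alerted = True
    elif temp <= THRESHOLD:
        already_alerted = False  # re-arm for the next incident
    time.sleep(60)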
Setting up a GPU box on Ubuntu 18.04 from scratch
What is this channel about?
This channel is a practitioner's channel on the following topics: Internet, Data Science, Deep Learning, Python, NLP
Don't get bent out of shape if your opinion differs.
No BS and no ads - I have already rejected 3-4 crappy ad deals
DS ML digests - via RSS or via URLs like this
Buy me a coffee 🤟 buymeacoff.ee/8oneCIN
Give us a rating:
Our website spark-in.me
Our chat t.me/joinchat/Bv9tjkH9JHYvOr92hi
DS courses review (RU) - very old
2017 - 2018 SpaceNet Challenge
DS Bowl 2018
Data Science tag on the website
CFT 2018 competition
More amazing NLP-related articles incoming!
Maybe finally we will make podcasts?
Which type of content do you / would you like most on the channel?
(2) is valid for models with a complex forward pass and for models with large embedding layers
PyTorch NLP best practices
Very simple ideas, actually.
(1) Multi GPU parallelization and FP16 training
Do not bother reinventing the wheel.
Just use NVIDIA's apex.
Best examples [here](github.com/huggingface/pytorch-p
(2) Put as much as possible INSIDE of the model
Implement as much of your logic as possible inside the model's forward().
So that you can seamlessly use all the abstractions from (1).
Also models are more abstract and reusable in general.
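A minimal sketch with a hypothetical model - note how forward() returns the loss, so DataParallel gathers small scalars instead of full logits tensors:

import torch
import torch.nn as nn

class ModelWithLoss(nn.Module):
    def __init__(self, in_dim=300, classes=5):
        super().__init__()
        self.clf = nn.Linear(in_dim, classes)
        self.loss = nn.CrossEntropyLoss()

    def forward(self, x, targets):
        logits = self.clf(x)
        # returning the loss keeps the big logits tensor on its own GPU
        return self.loss(logits, targets)

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = ModelWithLoss().to(device)
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)

x = torch.randn(8, 300).to(device)
targets = torch.randint(0, 5, (8,)).to(device)
loss = model(x, targets).mean()  # mean over the per-GPU losses
loss.backward()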
(3) Why have a separate train/val loop?
PyTorch 0.4 introduced context handlers.
You can simplify your train / val / test loops, and merge them into one simple function.
context = torch.no_grad() if loop_type == 'Val' else torch.enable_grad()
with context:
    for i, some_tensor in enumerate(tqdm(loader)):
        # do your stuff here
(4) Use the EmbeddingBag layer for morphologically rich languages. Seriously!
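E.g. feed it flattened bags of sub-word / morpheme ids plus offsets, and it embeds and pools in one step (the ids here are made up):

import torch
import torch.nn as nn

bag = nn.EmbeddingBag(num_embeddings=50000, embedding_dim=300, mode='mean')

ids = torch.tensor([3, 17, 9, 42, 5])  # pieces of 2 words, flattened
offsets = torch.tensor([0, 3])         # word 1 = ids[0:3], word 2 = ids[3:5]
word_vectors = bag(ids, offsets)       # shape: (2, 300)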
(5) Writing trainers / training abstractions
This is a waste of time imho if you follow (1), (2) and (3).
(6) Nice bonus
If you follow most of these, you can train on as many GPUs and machines as you want, for any language.
(7) Using tensorboard for logging
This goes without saying.
PyTorch DataLoader, GIL thrashing and CNNs
Well, all of this seems a bit like magic to me, but hear me out.
I abused my GPU box for weeks running CNNs on 2-4 GPUs.
And then my GPU box started shutting down for no apparent reason.
No, this was not CPU overheating - I have a massive cooler, and I checked that it works (AMD's weird temperature readings also added to the confusion).
To cut a long story short: if you have a very fast Dataset class and use PyTorch's DataLoader with workers > 0, it can lead to system instability instead of a speedup.
It is obvious in retrospect, but it is not when you face this issue.
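The takeaway in code (toy in-memory dataset; the point is the num_workers setting):

import torch
from torch.utils.data import DataLoader, TensorDataset

# a toy dataset whose __getitem__ is nearly free (tensors already in RAM)
dataset = TensorDataset(torch.randn(1000, 10))

# with such a fast Dataset, extra workers mostly burn CPU on inter-process
# overhead; num_workers=0 keeps loading in the main process
loader = DataLoader(dataset, batch_size=32, num_workers=0)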
Russian thesaurus that really works
It knows so many peculiar / old-fashioned and cheeky synonyms for obscene words!
Russian Distributional Thesaurus (RDT) is a project to create an open distributional thesaurus of the Russian language. At the moment the resource contains several components: word vectors (word embeddings), a word similarity graph (the distributional thesaurus), a set of hypernyms and an inventory of word senses. All resources were built automatically from a corpus of Russian-language books (12.9 billion tokens). Future versions of the resource are planned to add word-sense vectors for Russian, obtained from the same text corpus. The project is developed by representatives of UrFU, Lomonosov Moscow State University and the University of Hamburg. In the past, researchers from South Ural State University, TU Darmstadt, the University of Wolverhampton and the University of Trento contributed to the project.
Old news ... but Attention works
Funnily enough, in the past my models:
- Either did not need attention;
- Or attention was implemented by @thinline72;
- Or the domain was so complicated (NMT) that I had to resort to boilerplate with key-value attention;
This was the first time I / we tried manually building a model with plain self-attention from scratch.
And you know - it really adds 5-10% to all of the tracked metrics.
Best plain attention layer in PyTorch - simple, well documented ... and it works in real life applications:
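For reference, a minimal sketch of a plain self-attention pooling layer (my own toy version, not necessarily the exact layer linked above):

import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    """Pools a sequence of hidden states into one vector via learned weights."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.scorer = nn.Linear(hidden_dim, 1)

    def forward(self, states, mask=None):
        # states: (batch, seq_len, hidden_dim); mask: bool (batch, seq_len)
        scores = self.scorer(states).squeeze(-1)          # (batch, seq_len)
        if mask is not None:
            scores = scores.masked_fill(~mask, float('-inf'))
        weights = F.softmax(scores, dim=-1)
        return (weights.unsqueeze(-1) * states).sum(dim=1)  # (batch, hidden_dim)

attn = SelfAttention(256)
pooled = attn(torch.randn(4, 20, 256))  # -> (4, 256)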
A new paradigm in ML?
In this blogpost I explore how ODEs can be used to solve data modelling problems. I take a deep dive into the data modelling problem at hand and present ODEs (which model rates of change) as an a...
Jupyter widgets + pandas
With the @interact decorator, the IPywidgets library automatically gives us a text box and a slider for choosing a column and number! It looks at the inputs
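A minimal example (toy DataFrame; a list becomes a dropdown, a tuple becomes a slider):

import pandas as pd
from ipywidgets import interact

df = pd.DataFrame({'a': range(100), 'b': range(100, 200)})  # toy data

@interact(column=list(df.columns), rows=(1, 20))
def show(column, rows=5):
    # the dropdown and slider are generated from the argument types
    return df[column].head(rows)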
Serialization of large objects in Python
So far I have found no sane way to do this with 1M chunks / 10GB+ object size.
Of course, chunking / plain
Feather / parquet fail at 2+ GB sizes.
Pickle works, but it is kind of slow.
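For reference, the pickle part - protocol 4 (Python 3.4+) is what handles objects over 4 GB at all (the object and path are stand-ins):

import pickle

big_object = {i: list(range(1000)) for i in range(1000)}  # stand-in object

# protocol 4 supports objects larger than 4 GB, unlike older
# protocols (and unlike feather/parquet, which fail at 2+ GB)
with open('big_object.pkl', 'wb') as f:
    pickle.dump(big_object, f, protocol=4)

with open('big_object.pkl', 'rb') as f:
    restored = pickle.load(f)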
Downsides of using Common Crawl
Took a look at the Common Crawl data I pre-processed myself last year and could not find abstracts - only sentences.
Took a look at these archives - data.statmt.org/ngrams/deduped/ - also only sentences, though they sometimes seem to be in logical order.
You can use any form of CC - but only to learn word representations. Not sentences.
Neat PyTorch hack
(1) If possible, implement your complex loss / logic within your model.forward()
(2) Enjoy the multi-GPU / multi-node training wrappers from APEX, PyTorch's DataParallel, DistributedDataParallel etc.
NLP - Highlight of the week - LASER
- Hm, a new sentence embedding tool?
- Plain PyTorch 1.0 / numpy / FAISS based;
- Looks like an off-shoot of their "unsupervised" NMT project;
- Alleged pros:
LASER’s vector representations of sentences are generic with respect to both the input language and the NLP task. The tool maps a sentence in any language to a point in a high-dimensional space with the goal that the same statement in any language will end up in the same neighborhood. This representation could be seen as a universal language in a semantic vector space. We have observed that the distance in that space correlates very well with the semantic closeness of the sentences.
They essentially trained an NMT model with a shared encoder for many languages.
It delivers extremely fast performance, processing up to 2,000 sentences per second on GPU.
The sentence encoder is implemented in PyTorch with minimal external dependencies.
Languages with limited resources can benefit from joint training over many languages.
The model supports the use of multiple languages in one sentence.
Performance improves as new languages are added, as the system learns to recognize characteristics of language families.
I tried training something similar - but it quickly over-fitted into just memorizing word indexes.
Pre-trained BERT in PyTorch
Model code here is just awesome.
The integrated DataParallel / DDP / FP16 wrappers are also awesome.
FP16 precision training from APEX just works (no idea about convergence though yet).
As for model weights - I cannot really tell, there is no dedicated Russian model.
The only problem I am facing now: with large embedding bags, the batch size is literally 1-4 even for smaller models.
And training models with sentencepiece is kind of feasible for rich languages, but you will always worry about generalization.
Did not try the generative pre-training (or the sentence-prediction pre-training); I hope that properly initializing embeddings will also work for a closed domain with a smaller model (they pre-train for 4 days on 4+ TPUs, lol).
Why even tackle such models?
Chat / dialogue / machine comprehension models are complex / require one-off feature engineering.
Being able to tune something like BERT on publicly available benchmarks and then on your domain can provide a good way to embed complex situations (like questions in dialogues).
New amazing video by 3B1B
Someone implemented instance weighted CE loss for PyTorch
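A minimal sketch of the idea (not necessarily that implementation): get per-sample losses via reduction='none', then scale each sample by its own weight.

import torch
import torch.nn.functional as F

def instance_weighted_ce(logits, targets, instance_weights):
    # per-sample losses instead of one averaged scalar
    per_sample = F.cross_entropy(logits, targets, reduction='none')
    return (per_sample * instance_weights).mean()

logits = torch.randn(4, 10)
targets = torch.randint(0, 10, (4,))
weights = torch.tensor([1.0, 0.5, 2.0, 1.0])  # e.g. up-weight rare samples
loss = instance_weighted_ce(logits, targets, weights)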
Wrote about this a year ago.
Forgot about it, a friend reminded me.
You can pass lists as Python command-line arguments.
parser.add_argument('--classifier_conf', default=[512, 2048, 5005], nargs='+', type=int)
and then just add params to your call as follows
--classifier_conf 512 2048 5005
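Self-contained, that looks like this:

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--classifier_conf', default=[512, 2048, 5005],
                    nargs='+', type=int)
args = parser.parse_args()
# args.classifier_conf is now a plain Python list of ints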
Linux subsystem in Windows 10
It works and installs in literally 2 clicks (run one command in PowerShell and then one-click install your Linux distro of choice from the Windows Store - yes, this is very funny indeed)!
Why would you need this?
To make and back up files with one command, for example =)
Something like this becomes reality on Windows:
cd /mnt/d/ && \
TIME=`date +%b-%d-%y` && \
FILENAME=working_files_tar-$TIME.tar.gz && \
INCREMENTAL_FILE=backup_data.snar && \
FOLDERS=$(<folders_backup.txt) && \
echo 'Using folder list' $FOLDERS && \
tar -cz --listed-incremental=$INCREMENTAL_FILE --verbose -f $FILENAME $FOLDERS
Also, you may add scp and you are good to go!
Also other potential use cases:
- You are somehow vendor-locked (I depend on proprietary drivers for my Thunderbolt port to attach an external GPU) or are just used to Windows' windows (or too lazy to install Linux);
- You need one particular Linux program, or you need to quickly test something and do not want to bother replicating your environment under Windows (yes, you can also run Docker, but there will be some learning curve);
- You run all of your programs remotely and use your Windows machine as a thin client, but sometimes you need git / bash / rsync - e.g. to download movies from your personal NAS;
ML trends in 2019?