An intro to RL
Though it is published by OpenAI and written in TF, this is simply amazing:
When it is colder, under full load the GPUs run at 70°C
Towards Data Science
Our article was accepted to their publication:
Also, once you have published there, you can keep publishing your work on TDS on a recurring basis =)
I doubt that this will be properly distributed to all 130k of their subscribers, but nevertheless this is a milestone.
Playing with Transformer
TL;DR: use only pre-trained models.
On classification tasks it performed the same as classic models.
On seq2seq it was much worse time- and memory-wise, though inference is faster.
FastText trained on a random mix of Russian Wikipedia / Taiga / Common Crawl
On our benchmarks it was marginally better than fastText trained on Araneum from RusVectores.
Standard params: (3,6) character n-grams + vector dimensionality of 300.
import fastText as ft
ft_model_big = ft.load_model('model')
And then just refer to it.
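For intuition: fastText represents a word by its character n-grams (the word is wrapped in < and > boundary markers), and the word vector is the sum of the n-gram vectors. A minimal sketch of (3,6) n-gram extraction — my own illustration, not the library's code:

```python
def char_ngrams(word, min_n=3, max_n=6):
    """Extract character n-grams the way fastText does:
    the word is wrapped in boundary markers < and > first."""
    wrapped = f"<{word}>"
    grams = []
    for n in range(min_n, max_n + 1):
        for i in range(len(wrapped) - n + 1):
            grams.append(wrapped[i:i + n])
    return grams

# Even a short word yields several subword units
print(char_ngrams("cat"))  # ['<ca', 'cat', 'at>', '<cat', 'cat>', '<cat>']
```

The real library additionally hashes the n-grams into a fixed number of buckets, which is why out-of-vocabulary words still get sensible vectors.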
A small saga about keeping GPUs cool
(1) 1-2 GPUs with blower (turbo) fans in a full tower:
-- idle: 40-45°C
-- full load: 80-85°C
(2) 3-4 GPUs with blower (turbo) fans in a full tower:
-- idle: 45-55°C
-- full load: 85-95°C
Also, with 3-4+ GPUs your room starts to heat up significantly, and even without full fan speed / overclocking the noise is not very pleasant.
What helped us:
(0) Adding a corrugated air duct to dump heat outside: minus 3-5°C under load;
(1) Adding a high-pressure fan to blow between the GPUs: minus 3-5°C under load;
(2) Placing the tower on the balcony: minus 3-5°C under load;
In the end it is possible to achieve <75°C under full load on 4 or even 6 GPUs.
Also reposts on additional platforms
- Habr - habr.com/post/428674/
Please support us if you have an account.
Building client routing / semantic search and clustering arbitrary external corpuses at Profi.ru
A brief executive summary about what we achieved at Profi.ru.
If you have similar experience or anything similar to share, please do not hesitate to contact me.
Also, we are planning to extend this article into a small series if it gains momentum, so please like / share the article if you like it.
Canonical one-hot encoding one-liner in PyTorch
Or a two-liner, whatever)
# trg is a tensor of target token indexes, shape (batch, seq_len)
trg_oh = torch.FloatTensor(trg.size(0), trg.size(1), self.tgt_vocab).zero_().to(self.device)
trg_oh.scatter_(2, trg.unsqueeze(2), 1)
#deep_learning
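The same trick as a self-contained, runnable snippet (only torch assumed; the unsqueeze makes the index shape match what scatter_ expects):

```python
import torch

def one_hot(indexes: torch.Tensor, vocab_size: int) -> torch.Tensor:
    """indexes: LongTensor of shape (batch, seq_len).
    Returns a float tensor of shape (batch, seq_len, vocab_size)."""
    oh = torch.zeros(indexes.size(0), indexes.size(1), vocab_size)
    # scatter 1s along the last dim at the positions given by indexes
    oh.scatter_(2, indexes.unsqueeze(2), 1)
    return oh

trg = torch.tensor([[0, 2], [1, 1]])
print(one_hot(trg, 3).shape)  # torch.Size([2, 2, 3])
```

In recent PyTorch versions torch.nn.functional.one_hot does the same thing in one call, but the scatter_ version lets you control dtype and device in one pass.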
Concurrent Spatial and Channel Squeeze & Excitation in Fully Convolutional Networks
- Essentially attention for semantic segmentation models: channel-wise, spatial, and mixed attention
- Paper arxiv.org/abs/1803.02579
- Implementation www.kaggle.com/c/tgs-salt-identi
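For reference, a rough sketch of the scSE block from the paper, assuming a standard PyTorch setup — my own reimplementation, not the linked Kaggle kernel:

```python
import torch
import torch.nn as nn

class SCSE(nn.Module):
    """Concurrent spatial and channel squeeze & excitation (scSE)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        # cSE: squeeze spatially (global pool), excite channel-wise
        self.cse = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # sSE: squeeze channel-wise (1x1 conv to one map), excite spatially
        self.sse = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x):
        # mixed attention: sum of the two recalibrated feature maps
        return x * self.cse(x) + x * self.sse(x)

x = torch.randn(2, 32, 8, 8)
print(SCSE(32)(x).shape)  # torch.Size([2, 32, 8, 8])
```

Since the block preserves the feature-map shape, it can be dropped after any encoder / decoder stage of a U-Net-style model.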
Do you read digests?
Yes – 37
Love them – 12
No – 11
I have an idea how to improve them (PM me) – 1
👥 61 people voted so far.
In case of a GitHub failure
They have a status blog with the current service statuses
Amazing articles about image hashing
Also a Python library
- Library github.com/JohannesBuchner/image
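For intuition, the core of an average hash (aHash) is tiny. A minimal sketch on an already-downscaled grayscale grid; real libraries such as the one linked above also handle the resizing and grayscale conversion:

```python
def average_hash(pixels):
    """pixels: 2D list of grayscale values (an already-downscaled image).
    Each bit is 1 if the pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = ''.join('1' if p > mean else '0' for p in flat)
    return int(bits, 2)

def hamming(h1, h2):
    """Number of differing bits -> distance between two hashes."""
    return bin(h1 ^ h2).count('1')

# Two noisy versions of the same "image": a bright top-left block
a = [[200, 200, 10, 10], [200, 200, 10, 10], [10, 10, 10, 10], [10, 10, 10, 10]]
b = [[190, 210, 5, 15], [205, 195, 12, 8], [9, 11, 10, 10], [12, 8, 10, 10]]
print(hamming(average_hash(a), average_hash(b)))  # 0 -> near-duplicates
```

Small pixel-level noise does not flip above-/below-mean bits, which is exactly why such hashes work for near-duplicate detection.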
Text iterators in PyTorch
Looks like PyTorch has some handy data-processing / loading tools for text models - torchtext.readthedocs.io.
It is explained here - bastings.github.io/annotated_enc
I guess PyTorch ends up in the bottom left corner, but realistically the author of this snippet just did a lot of import A as B
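If you prefer to stay dependency-free, the padding / batching part that torchtext handles can be sketched in plain PyTorch (names here are mine):

```python
import torch
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader

PAD_IDX = 0

def collate(batch):
    """batch: list of 1D LongTensors of varying length.
    Pads them to the longest sequence in the batch."""
    lengths = torch.tensor([len(seq) for seq in batch])
    padded = pad_sequence(batch, batch_first=True, padding_value=PAD_IDX)
    return padded, lengths

data = [torch.tensor([5, 6, 7]), torch.tensor([8, 9]), torch.tensor([4])]
loader = DataLoader(data, batch_size=3, collate_fn=collate)
padded, lengths = next(iter(loader))
print(padded.shape)  # torch.Size([3, 3])
```

Returning the true lengths alongside the padded batch lets you feed them straight into pack_padded_sequence for RNN models.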
Google's super resolution zoom
Finally, Google made something interesting
Mixed precision distributed training ImageNet example in PyTorch
An Open source alternative to Mendeley
Looks like Zotero is also cross-platform and open source.
Also, you can import your whole Mendeley library with one button push:
Zotero is a free, easy-to-use tool to help you collect, organize, cite, and share research.