Spark in me - Internet, data science, math, deep learning, philo

snakers4 @ telegram, 1319 members, 1513 posts since 2016

All this - lost like tears in rain.

Data science, deep learning, sometimes a bit of philosophy and math. No bs.

Our website
- spark-in.me
Our chat
- goo.gl/WRm93d
DS courses review
- goo.gl/5VGU5A
- goo.gl/YzVUKf

Posts by tag «deep_learning»:

snakers4 (Alexander), July 15, 08:54

Sometimes in supervised ML tasks leveraging the data structure in a self-supervised fashion really helps!

Playing with CrowdAI mapping competition

In my opinion it is a good testing ground for your ideas in SemSeg - the dataset is really clean and balanced

spark-in.me/post/a-small-case-for-search-of-structure-within-your-data

#deep_learning

#data_science

#satellite_imaging

Playing with Crowd-AI mapping challenge - or how to improve your CNN performance with self-supervised techniques

In this article I describe a couple of neat optimizations / tricks / useful ideas that can be applied to many SemSeg / ML tasks. Author's articles - http://spark-in.me/author/snakers41 Blog - http://spark-in.me


snakers4 (spark_comment_bot), July 13, 05:22

2018 DS/ML digest 17

Highlights of the week

(0) Troubling trends in ML scholarship

approximatelycorrect.com/2018/07/10/troubling-trends-in-machine-learning-scholarship/

(1) NLP close to its ImageNet stage?

thegradient.pub/nlp-imagenet/

Papers / posts / articles

(0) Working with multi-modal data distill.pub/2018/feature-wise-transformations/

- concatenation-based conditioning

- conditional biasing or scaling ("residual" connections)

- sigmoidal gating

- all in all this approach seems like a mixture of attention / gating for multi-modal problems (a minimal FiLM-style sketch below)
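To make the conditional biasing / scaling idea concrete, here is a minimal FiLM-style layer sketch in PyTorch (the class, names and shapes are mine, not taken from the distill.pub article):

import torch.nn as nn

class FiLM(nn.Module):
    # predicts a per-channel scale (gamma) and bias (beta) from the
    # conditioning input and applies them feature-wise
    def __init__(self, cond_dim, num_features):
        super().__init__()
        self.to_gamma = nn.Linear(cond_dim, num_features)
        self.to_beta = nn.Linear(cond_dim, num_features)

    def forward(self, feature_map, cond):
        # feature_map: (N, C, H, W), cond: (N, cond_dim)
        gamma = self.to_gamma(cond)[:, :, None, None]
        beta = self.to_beta(cond)[:, :, None, None]
        return gamma * feature_map + beta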

(1) Glow, a reversible generative model which uses invertible 1x1 convolutions

blog.openai.com/glow/

(2) Facebook's moonshots - I kind of do not understand much here

- research.fb.com/facebook-research-at-icml-2018/

(3) RL concept flaws?

- thegradient.pub/why-rl-is-flawed/

(4) Intriguing failures of convolutions

eng.uber.com/coordconv/ - this is fucking amazing

(5) People are only STARTING to apply ML to reasoning

deepmind.com/blog/measuring-abstract-reasoning/

Yet another online book on Deep Learning

(1) Kind of standard livebook.manning.com/#!/book/grokking-deep-learning/chapter-1/v-10/1

Libraries / code

(0) Data version control continues to develop dvc.org/features

#deep_learning

#data_science

#digest

Like this post or have something to say => tell us more in the comments or donate!

Troubling Trends in Machine Learning Scholarship

By Zachary C. Lipton* & Jacob Steinhardt* *equal authorship Originally presented at ICML 2018: Machine


snakers4 (Alexander), July 11, 06:51

TF 1.9

github.com/tensorflow/tensorflow/releases/tag/v1.9.0

Funnily enough, they call Keras not "Keras with TF back-end", but "tf.keras"

xD

#deep_learning

tensorflow/tensorflow

tensorflow - Computation using data flow graphs for scalable machine learning


snakers4 (Alexander), July 09, 09:04

2018 DS/ML digest 16

Papers / posts

(0) RL now solves Quake

venturebeat.com/2018/07/03/googles-deepmind-taught-ai-teamwork-by-playing-quake-iii-arena/

(1) A fast.ai post about AdamW (a small sketch of the key idea below)

www.fast.ai/2018/07/02/adam-weight-decay/

-- Adam generally requires more regularization than SGD, so be sure to adjust your regularization hyper-parameters when switching from SGD to Adam

-- Amsgrad turns out to be very disappointing

-- Refresher article ruder.io/optimizing-gradient-descent/index.html#nadam
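A minimal sketch of the decoupled weight decay idea from the post, hand-rolled on top of plain Adam (PyTorch had no built-in AdamW at the time of writing; model and loss are assumed to exist, the lr / wd values are illustrative):

import torch

lr, wd = 1e-3, 1e-2
optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # note: no weight_decay here

loss.backward()
for group in optimizer.param_groups:
    for p in group['params']:
        p.data.mul_(1 - lr * wd)  # decay the weights directly, decoupled from the gradient step
optimizer.step()  # plain Adam step on the raw gradients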

(2) How to tackle new classes in CV

petewarden.com/2018/07/06/what-image-classifiers-can-do-about-unknown-objects/

(3) A new word in GANs?

-- ajolicoeur.wordpress.com/RelativisticGAN/

-- arxiv.org/pdf/1807.00734.pdf

(4) Using deep learning representations for search

-- goo.gl/R1vhTh

-- a library for fast approximate nearest-neighbour search in Python github.com/spotify/annoy (a minimal usage sketch below)
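A minimal Annoy usage sketch (the embedding size and tree count are illustrative):

from annoy import AnnoyIndex
import random

dim = 512  # e.g. the size of your CNN embeddings
index = AnnoyIndex(dim, 'angular')  # cosine-like metric

for i in range(10000):
    index.add_item(i, [random.gauss(0, 1) for _ in range(dim)])

index.build(10)  # more trees => better recall, bigger index
print(index.get_nns_by_vector([0.0] * dim, 10))  # 10 approximate neighbours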

(5) One more paper on GAN convergence

avg.is.tuebingen.mpg.de/publications/meschedericml2018

(6) Switchable normalization - adds a bit to ResNet50 + pre-trained models

github.com/switchablenorms/Switchable-Normalization

Datasets

(0) Disney starts to release datasets

www.disneyanimation.com/technology/datasets

Market / interesting links

(0) A motion to open-source GitHub

github.com/dear-github/dear-github/issues/304

(1) Allegedly the GTX 1180 is starting to appear on sale in Asia (?)

(2) Some controversy regarding Andrew Ng and self-driving cars goo.gl/WNW4E3

(3) National AI strategies overviewed - goo.gl/BXDCD7

-- Canada C$135m

-- China has the largest strategy

-- Notably - countries like Finland also have one

(4) Amazon allegedly sells face recognition to the USA goo.gl/eDzekn

#data_science

#deep_learning

Google’s DeepMind taught AI teamwork by playing Quake III Arena

Google’s DeepMind today shared the results of training multiple AI systems to play Capture the Flag on Quake III Arena, a multiplayer first-person shooter game. The AI played nearly 450,000 g…


snakers4 (spark_comment_bot), July 07, 12:29

Playing with VAEs and their practical use

So, I played a bit with Variational Auto Encoders (VAE) and wrote a small blog post on this topic

spark-in.me/post/playing-with-vae-umap-pca

Please like, share and repost!

#deep_learning

#data_science

Like this post or have something to say => tell us more in the comments or donate!

Playing with Variational Auto Encoders - PCA vs. UMAP vs. VAE on FMNIST / MNIST

In this article I thoroughly compare the performance of VAE / PCA / UMAP embeddings on a simplistic domain - UMAP. Author's articles - http://spark-in.me/author/snakers41 Blog - http://spark-in.me


snakers4 (Alexander), July 04, 07:57

2018 DS/ML digest 15

What I filtered through this time

Market / news

(0) Letters by big company employees against using ML for weapons

- Microsoft

- Amazon

(1) Facebook open-sources DensePose (essentially this is Mask-RCNN)

- research.fb.com/facebook-open-sources-densepose/

Papers / posts / NLP

(0) One more blog post about text / sentence embeddings goo.gl/Zm8C2c

- key idea - different weighting

(1) One more sentence embedding calculation method

- openreview.net/pdf?id=SyK00v5xx ?

(2) Posts explaining NLP embeddings

- www.offconvex.org/2015/12/12/word-embeddings-1/ - some basics - SVD / Word2Vec / GloVe

-- SVD improves embedding quality (as compared to one-hot encodings)?

-- use log-weighting, use TF-IDF weighting (the above weighting)

- www.offconvex.org/2016/02/14/word-embeddings-2/ - word embedding properties

-- dimensions vs. embedding quality www.cs.princeton.edu/~arora/pubs/LSAgraph.jpg

(3) Spacy + Cython = 100x speed boost - goo.gl/9TwVqu - good to know about this as a last resort

- described use-cases:

-- you are pre-processing a large training set for a deep learning framework like PyTorch / TensorFlow

-- or you have heavy processing logic in your deep learning batch loader that slows down your training

(4) Once again stumbled upon this - blog.openai.com/language-unsupervised/

(5) Papers

- Simple NLP embedding baseline goo.gl/nGujzS

- NLP decathlon for question answering goo.gl/6HHi7q

- Debiasing embeddings arxiv.org/abs/1806.06301

- Once again transfer learning in NLP by open-AI - goo.gl/82VR4U

#deep_learning

#digest

#data_science


snakers4 (Alexander), July 04, 05:12

Open Images Object detection on Kaggle

- www.kaggle.com/c/google-ai-open-images-object-detection-track#Description

- Key ideas

-- 1.2M images, high-res, 500 classes

-- decent prizes, but short time-span (2 months)

-- object detection

#deep_learning

Google AI Open Images - Object Detection Track

Detect objects in varied and complex images.


snakers4 (Alexander), July 03, 07:15

A cool article from Ben Evans about how to think about ML

www.ben-evans.com/benedictevans/2018/06/22/ways-to-think-about-machine-learning-8nefy

Ways to think about machine learning

We're now four or five years into the current explosion of machine learning, and pretty much everyone has heard of it, and every big company is working on projects around ‘AI’. We know this is a Next Big Thing. I don't think, though, that we yet have a settled sense of quite what machine learning m


My recent PyTorch 0.4 Dockerfile for CV

gist.github.com/snakers4/72ccc3d936f04a3307d20f1810b2fa81

#deep_learning

My PyTorch 0.4 Dockerfile


snakers4 (Alexander), July 02, 04:51

2018 DS/ML digest 14

Amazing article - why you do not need ML

- cyberomin.github.io/startup/2018/07/01/sql-ml-ai.html

- I personally love plain-vanilla SQL and in 90% of cases people under-use it

- I even wrote 90% of my JSON API on our blog in pure PostgreSQL xD

Practice / papers

(0) Interesting papers from CVPR towardsdatascience.com/the-10-coolest-papers-from-cvpr-2018-11cb48585a49

(1) Some down-to-earth obstacles to ML deploy habr.com/company/hh/blog/415437/

(2) Using synthetic data for CNNs (by Nvidia) - arxiv.org/pdf/1804.06516.pdf

(3) This puzzles me - so much effort and engineering spent on something ... strange and useless - taskonomy.stanford.edu/index.html

On paper they do a cool thing - investigate transfer learning between different domains, but in practice it is done on TF and there is no clear conclusion of any kind

(4) VAE + real datasets siavashk.github.io/2016/02/22/autoencoder-imagenet/ - only small Imagenet (64x64)

(5) Understanding the speed of models deployed on mobile - machinethink.net/blog/how-fast-is-my-model/

(6) A brief overview of multi-modal methods medium.com/mlreview/multi-modal-methods-image-captioning-from-translation-to-attention-895b6444256e

Visualizations / explanations

(0) Amazing website with ML explanations explained.ai/

(1) PCA and linear VAEs are close pvirie.wordpress.com/2016/03/29/linear-autoencoders-do-pca/

#deep_learning

#digest

#data_science

No, you don't need ML/AI. You need SQL

A while ago, I did a Twitter thread about the need to use traditional and existing tools to solve everyday business problems other than jumping on new buzzwords, sexy and often times complicated technologies.


snakers4 (Alexander), June 28, 15:22

Playing with PyTorch 0.4

It was released some time ago

If you are not aware - this is the best summary

pytorch.org/2018/04/22/0_4_0-migration-guide.html

My first-hand experiences

- Multi-GPU support works strangely

- If you just launch your 0.3 code, it will work on 0.4 with warnings - not really a breaking change

- All the new features are really cool, useful and make using PyTorch even more delightful

- I especially liked how they added context managers and cleaned up the device mess

#deep_learning

snakers4 (Alexander), June 28, 11:18

DL Framework choice - 2018

If you are still new to DL / DS / ML and have not yet chosen your framework, consider reading this before proceeding

- deepsense.ai/keras-or-pytorch/

#deep_learning

snakers4 (Alexander), June 28, 07:43

2018 DS/ML digest 13

Blog posts / articles:

(0) Google notes on CNN generalization - goo.gl/XS4KAw

(1) Google teaching robots in a virtual environment and then transferring models to reality - goo.gl/aAYCqE

(2) Google's object tracking via image colorization - goo.gl/xchvBQ

(3) Interesting articles about VAEs:

- A small intro into VAEs habr.com/company/otus/blog/358946/

- A small intuitive intro (super super cool and intuitive)

towardsdatascience.com/intuitively-understanding-variational-autoencoders-1bfe67eb5daf

- KL divergence explained

www.countbayesie.com/blog/2017/5/9/kullback-leibler-divergence-explained

- A more formal write-up arxiv.org/abs/1606.05908


- Converting an FC layer into a conv layer cs231n.github.io/convolutional-networks/#convert (a small sketch after this list)

- A post by Fchollet blog.keras.io/building-autoencoders-in-keras.html
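A minimal sketch of the FC-to-conv conversion from the cs231n note (the sizes are illustrative): an FC layer over a 7x7x512 volume is equivalent to a 7x7 convolution carrying the same weights.

import torch
import torch.nn as nn

fc = nn.Linear(512 * 7 * 7, 4096)
conv = nn.Conv2d(512, 4096, kernel_size=7)
# reshape the FC weight matrix into a conv kernel
conv.weight.data = fc.weight.data.view(4096, 512, 7, 7)
conv.bias.data = fc.bias.data

x = torch.randn(1, 512, 7, 7)
print(torch.allclose(fc(x.view(1, -1)), conv(x).view(1, -1), atol=1e-5))  # True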

A good in-depth write-up on object detection:

- machinethink.net/blog/object-detection/

- finally a decent explanation of YOLO parametrization machinethink.net/images/object-detection/[email protected]

- best comparison of YOLO and SSD ever - machinethink.net/images/object-detection/[email protected]

Papers with interesting abstracts (just good to know such things exist)

- Low-bit CNNs - ai.intel.com/nervana/wp-content/uploads/sites/53/2018/06/ELQ_CameraReady_CVPR2018.pdf

- Automated Meta ML - arxiv.org/abs/1806.06927

- Idea - use ResNet blocks for boosting - arxiv.org/abs/1706.04964

- 2D-discrete-Fourier transform (2D-DFT) to encode rotational invariance in neural networks - arxiv.org/abs/1805.12301

- Smallify the CNNs - arxiv.org/abs/1806.03723

- BLEU review as a metric - conclusion - it is good on average to measure MT performance - www.mitpressjournals.org/doi/abs/10.1162/COLI_a_00322

"New" ideas in SemSeg:

- UNET + conditional VAE arxiv.org/abs/1806.05034

- Dilated convolutions for large satellite images arxiv.org/abs/1709.00179 - looks like this works only if you have high resolution with small objects

#digest

#deep_learning

How Can Neural Network Similarity Help Us Understand Training and Generalization?

Posted by Maithra Raghu, Google Brain Team and Ari S. Morcos, DeepMind In order to solve tasks, deep neural networks (DNNs) progressively...


snakers4 (Alexander), June 26, 07:02

If someone needs a dataset, Kaggle launched ImageNet object detection

- www.kaggle.com/c/imagenet-object-localization-challenge#description

There is also the Open Images dataset, which I guess is bigger though

#deep_learning

ImageNet Object Localization Challenge

Identify the objects in images


snakers4 (Alexander), June 25, 10:53

A subscriber sent a really decent CS university scientific ranking

csrankings.org/#/index?all&worldpu

Useful if you want to apply there for a CS/ML-based Ph.D.

#deep_learning

Transformer in PyTorch

Looks like somebody implemented OpenAI's recent transformer LM fine-tuning in PyTorch

github.com/huggingface/pytorch-openai-transformer-lm

Nice!

#nlp

#deep_learning

huggingface/pytorch-openai-transformer-lm

pytorch-openai-transformer-lm - A PyTorch implementation of OpenAI's finetuned transformer language model with a script to import the weights pre-trained by OpenAI


snakers4 (spark_comment_bot), June 21, 14:13

Playing with multi-GPU small batch-sizes

If you play with SemSeg with a big model and large images (HD, FullHD) - you may face a situation when only one image fits on one GPU.

Also this is useful if your train-test split is far from ideal and/or you are using pre-trained imagenet encoders for a SemSeg task - so you cannot really update your bnorm params.

Also AFAIK - all the major deep-learning frameworks:

(0) do not have batch-norm freeze options on evaluation (batch-norm contains 2 sets of parameters - learnable affine weights and running statistics, which are updated during training and used at inference)

(1) calculate batch-norm for each GPU separately

All this may mean that your models may severely underperform at inference in these situations.

Solutions?

(0) Sync batch-norm. I believe that to do it properly you have to modify the framework you are using, but there is a PyTorch implementation done for CVPR 2018 - with an explanation here hangzh.com/PyTorch-Encoding/notes/syncbn.html - I guess if its multi-GPU model wrappers can be used with any model, then we are in the money

(1) Use affine=False in your batch-norm. But probably in this case imagenet initialization will not help - you will have to train your model from scratch completely

(2) Freeze your encoder batch-norm params completely (a minimal sketch after this item)

discuss.pytorch.org/t/how-to-train-with-frozen-batchnorm/12106/10 (though I am not sure - they do not seem to be freezing the running mean parameters) - probably this also needs m.trainable = False or something like this
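A minimal sketch of option (2) in PyTorch - freezing both the affine params and the running statistics of every batch-norm layer (the helper name is mine):

import torch.nn as nn

def freeze_bn(model):
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            m.eval()  # stop updating running mean / var
            for p in m.parameters():  # freeze the learnable weight / bias
                p.requires_grad = False

Note that model.train() re-enables the running-stat updates, so freeze_bn() has to be re-applied after every model.train() call.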

(3) Use recent Facebook group norm - arxiv.org/pdf/1803.08494.pdf

This is a finicky topic - please tell us in the comments about your experiences and tests

#deep_learning

#cv

Like this post or have something to say => tell us more in the comments or donate!

How to train with frozen BatchNorm?

Since pytorch does not support syncBN, I hope to freeze mean/var of BN layer while trainning. Mean/Var in pretrained model are used while weight/bias are learnable. In this way, calculation of bottom_grad in BN will be different from that of the novel trainning mode. However, we do not find any flag in the function bellow to mark this difference. pytorch/torch/csrc/cudnn/BatchNorm.cpp void cudnn_batch_norm_backward( THCState* state, cudnnHandle_t handle, cudnnDataType_t dataType, THVo...


snakers4 (Alexander), June 10, 15:35

And now the habr.ru article is also live -

habr.com/post/413775/

Please support us with your likes!

#deep_learning

#data_science

Adversarial attacks in the Machines Can See 2018 competition

Or how I ended up on a winning team of the Machines Can See 2018 adversarial competition. The essence of any adversarial attack, by example. As it...


snakers4 (Alexander), June 10, 06:50

An interesting idea from a CV conference

Imagine that you have some kind of algorithm that is not exactly differentiable, but is "back-propable".

In this case you can have very convoluted logic in your "forward" statement (essentially something in between trees and dynamic programming) - for example a set of clever if-statements.

In this case you get the best of both worlds - your algorithm (which you will have to re-implement in your framework) and backprop + CNN. Nice.

Ofc this works only for dynamic deep-learning frameworks.
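A toy illustration (the function is made up): data-dependent branching in the forward pass, through which a dynamic framework like PyTorch backprops just fine.

import torch

def clever_forward(x):
    # arbitrarily convoluted, data-dependent control flow
    if x.sum() > 0:
        return (x * 2).clamp(min=0)
    return x ** 2

x = torch.randn(5, requires_grad=True)
clever_forward(x).sum().backward()  # gradients flow through the taken branch
print(x.grad)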

#deep_learning

#data_science

Machines Can See 2018 adversarial competition

Happened to join forces with a team that won 2nd place in this competition

- spark-in.me/post/playing-with-mcs2018-adversarial-attacks

It was very entertaining and a new domain to me.

Read more materials:

- Our repo github.com/snakers4/msc-2018-final

- Our presentation drive.google.com/file/d/1P-4AdCqw81nOK79vU_m7IsCVzogdeSNq/view

- All presentations drive.google.com/file/d/1aIUSVFBHYabBRdolBRR-1RKhTMg-v-3f/view

#data_science

#deep_learning

#adversarial

Playing with adversarial attacks on Machines Can See 2018 competition

This article is about the MCS 2018 competition and my participation in it, adversarial attack methods, and how our team won. Author's articles - http://spark-in.me/author/snakers41 Blog - http://spark-in.me


snakers4 (spark_comment_bot), June 06, 07:55

2018 DS/ML digest 11

Datasets

(0)

New Andrew Ng paper on radiology datasets

YouTube 8M Dataset post

As mentioned before - this is more or less blatant TF marketing

New papers / models / architectures

(0) Google RL search for optimal augmentations

- Blog, paper

- Finally Google paid attention to augmentations

- 83.54% top1 accuracy on ImageNet

- Discrete search problem; each policy consists of 5 sub-policies, with each operation associated with two hyper-parameters: probability and magnitude

- Training regime: cosine decay for 200 epochs

- Top accuracy on ImageNet

- Best policy

- Typical examples of augmentations

(1)

Training CNNs with less data

Key idea - with clever selection of data you can decrease annotation costs 2-3x

(2)

Regularized Evolution for Image Classifier Architecture Search (AmoebaNet)

- The first controlled comparison of the two search algorithms (genetic and RL)

- Mobile-size ImageNet (top-1 accuracy = 75.1% with 5.1M parameters)

- ImageNet (top-1 accuracy = 83.1%)

Evolution vs. RL at Large-Compute Scale

• Evolution and RL do equally well on accuracy

• Both are significantly better than Random Search

• Evolution is faster

But the proper description of the architecture is nowhere to be seen...

Libraries / code / frameworks

(0) OpenCV installation for Ubuntu18 from source (if you need e.g. video support)

News / market

(0) Idea - adversarial filters for apps - goo.gl/L4Vne7

(1) A list of 30 best practices for amateur ML / DL specialists - forums.fast.ai/t/30-best-practices/12344

- Some ideas about tackling naive NLP problems

- PyTorch allegedly supports just freezing bn layers

- Also a neat idea I tried with inception nets - assign different learning rates to different layer groups when fine-tuning larger models

(2) Stumbled upon a reference to NAdam being a bit better than Adam as an optimizer

It is also described in this popular article

(3) Barcode reader via OpenCV

#deep_learning

#digest

Like this post or have something to say => tell us more in the comments or donate!

snakers4 (Alexander), June 05, 14:42

A very useful combination in tmux

You can resize your panes as follows:

- first press ctrl+b

- then hold ctrl

- press the arrow keys several times while holding ctrl

...

- profit

#linux

#deep_learning

Digest about Internet

(0) Ben Evans Internet digest - goo.gl/uoQCBb

(1) GitHub purchased by Microsoft - goo.gl/49X74r

-- If you want to migrate - there are guides already - about.gitlab.com/2018/06/03/movingtogitlab/

(2) And a post on how Microsoft kind of ruined Skype - goo.gl/Y7MJJL

-- focus on b2b

-- lack of focus, constant redesigns, faltering service

(3) No drop in FB usage after its controversies - goo.gl/V93j2v

(4) Facebook allegedly employs 1200 moderators for Germany - goo.gl/VBcYQQ

(5) Looks like many Linux networking tools have been outdated for years

dougvitale.wordpress.com/2011/12/21/deprecated-linux-networking-commands-and-their-replacements/

#internet

#digest

snakers4 (Alexander), May 31, 07:25

New cool papers on CNNs

(0) Do Better ImageNet Models Transfer Better?

An implicit hypothesis in modern computer vision research is that models that perform better on ImageNet necessarily perform better on other vision tasks.

However, this hypothesis has never been systematically tested.

- Wow - an empirical study of why ResNets rule - they are just better non-finetuned feature extractors and then are probably easier to fine-tune

- ResNets are the best fixed feature extractors

- Also ImageNet pretraining accelerates convergence

- Also my note is that inception-based models are more difficult to fine-tune.

- Among top ranking models are - Inception, NasNet, AmoebaNet

- Also my personal remark - any CNN architecture can be fine-tuned to be relatively good, you just need to invent a proper training regime

Just the abstract says it all

Here, we compare the performance of 13 classification models on 12 image classification tasks in three settings: as fixed feature extractors, fine-tuned, and trained from random initialization. We find that, when networks are used as fixed feature extractors, ImageNet accuracy is only weakly predictive of accuracy on other tasks (r2 = 0.24). In this setting, ResNets consistently outperform networks that achieve higher accuracy on ImageNet. When networks are fine-tuned, we observe a substantially stronger correlation (r2 = 0.86). We achieve state-of-the-art performance on eight image classification tasks simply by fine-tuning state-of-the-art ImageNet architectures, outperforming previous results based on specialized methods for transfer learning.

(1) Shampoo: Preconditioned Stochastic Tensor Optimization

Looks really cool - but their implementation requires SVD and is slow for real tasks

Also they tested it only on toy tasks

arxiv.org/abs/1802.09568

github.com/moskomule/shampoo.pytorch

In a real application the PyTorch implementation takes 175.58s per batch

#deep_learning

moskomule/shampoo.pytorch

shampoo.pytorch - An implementation of shampoo


snakers4 (Alexander), May 31, 06:55

Some insights about why the recent TF speech recognition challenge dataset was so poor in quality:

- petewarden.com/2018/05/28/why-you-need-to-improve-your-training-data-and-how-to-do-it/

Cool ideas

+ a cool idea - use the last CNN layer as an embedding for TensorBoard visualization, plus a how-to

#deep_learning

Why you need to improve your training data, and how to do it

Photo by Lisha Li Andrej Karpathy showed this slide as part of his talk at Train AI and I loved it! It captures the difference between deep learning research and production perfectly. Academic pape…


snakers4 (Alexander), May 30, 05:36

Transforms in PyTorch

They added a lot of useful stuff lately:

- pytorch.org/docs/master/torchvision/transforms.html

Basically this enables you to build decent pre-processing out of the box for simple tasks (just images)

I believe it will be much slower than OpenCV, but for small tasks it's ideal, if you do not look under the hood
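For example, a typical out-of-the-box pipeline looks roughly like this (the transform choices and constants are illustrative; the normalization uses the usual imagenet stats):

from torchvision import transforms

preprocess = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])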

#deep_learning

MobileNetv2

New light-weight architecture from Google with 72%+ top1

(0)

Performance goo.gl/2czk9t

Link arxiv.org/abs/1801.04381

Pre-trained implementation

- github.com/tonylins/pytorch-mobilenet-v2

- but this one took much more memory than I expected

- did not debug it

(1)

Gist - new light-weight architecture from Google with 72%+ top1 on Imagenet

Ofc Google promotes only its own papers there

No mention of SqueezeNet

This is somewhat disturbing

(2)

Novel ideas

- the shortcut connections are between the thin bottleneck layers

- the intermediate expansion layer uses lightweight depthwise convolutions

- it is important to remove non-linearities in the narrow layers in order to maintain representational power (a minimal block sketch below)

(3)

Very novel idea - it is argued that non-linearities collapse some information.

When the dimensionality of useful information is low, you can do w/o them w/o loss of accuracy

(4) Building blocks

- Recent small networks' key features (except for SqueezeNet ones) - goo.gl/mQtrFM

- MobileNet building block explanation

- goo.gl/eVnWQL goo.gl/Gj8eQ5

- Overall architecture - goo.gl/RRhxdp
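A minimal sketch of the building block described above - expand => depthwise conv => linear bottleneck, with a shortcut between the thin layers (the defaults follow the paper, the class itself is illustrative):

import torch.nn as nn

class InvertedResidual(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1, expansion=6):
        super().__init__()
        mid = in_ch * expansion
        self.use_shortcut = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, mid, 1, bias=False),  # expansion layer
            nn.BatchNorm2d(mid),
            nn.ReLU6(inplace=True),
            nn.Conv2d(mid, mid, 3, stride, 1, groups=mid, bias=False),  # depthwise
            nn.BatchNorm2d(mid),
            nn.ReLU6(inplace=True),
            nn.Conv2d(mid, out_ch, 1, bias=False),  # project back to a thin layer
            nn.BatchNorm2d(out_ch),  # note: no non-linearity in the narrow layer
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_shortcut else out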

#deep_learning

snakers4 (spark_comment_bot), May 28, 08:09

A couple of neat tricks in PyTorch to make code more compact and more useful for hyper-param tuning

You may have seen that today one can use CNNs even for tabular data.

In this case you may have to resort to a lot of fiddling with model capacity and hyper-params.

It is kind of easy to do so in Keras, but doing this in PyTorch requires a bit more fiddling.

Here are a couple of patterns that may help with this:

(0) Clever use of nn.Sequential()

self.layers = nn.Sequential(*[
    ConvLayer(in_channels=channels,
              out_channels=channels,
              kernel_size=kernel_size,
              activation=activation,
              dropout=dropout)
    for _ in range(blocks)
])

(1) Clever use of lists (which is essentially the same as above)

Just this construction may save a lot of space and give a lot of flexibility

modules = []
modules.append(...)
self.classifier = nn.Sequential(*modules)

(2) Pushing as many hyper-params as possible into console-script flags (a small sketch below)

You can even encode something like 1024_512_256 to be passed as a list to your model constructor, i.e.

1024_512_256 => 1024,512,256 => an MLP with the corresponding number of neurons
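A minimal sketch of parsing such a flag with argparse (the flag name is made up for illustration):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--hidden_sizes', type=str, default='1024_512_256')
args = parser.parse_args()

# '1024_512_256' => [1024, 512, 256] => widths for your MLP constructor
hidden_sizes = [int(s) for s in args.hidden_sizes.split('_')]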

(3) (Obvious) Using OOP where it makes sense

Example I recently used for one baseline

#deep_learning

Like this post or have something to say => tell us more in the comments or donate!

Playing with MLP + embeddings in PyTorch


snakers4 (Alexander), May 25, 07:29

New competitions on Kaggle

Kaggle has started a new competition with video ... which is one of those competitions (read between the lines - blatant marketing)

www.kaggle.com/c/youtube8m-2018

I.e.

- TensorFlow Record files

- Each of the top 5 ranked teams will receive $5,000 per team as a travel award - no real prizes

- The complete frame-level features take about 1.53TB of space (and yes, these are not videos, but extracted CNN features)

So, they are indeed using their platform to promote their business interests.

Released free datasets are really cool, but only when you can use them for transfer learning, which implies also seeing the underlying ground-level data (i.e. the images or videos).

#data_science

#deep_learning

The 2nd YouTube-8M Video Understanding Challenge

Can you create a constrained-size model to predict video labels?


snakers4 (Alexander), May 07, 18:09

Fast.ai published the 2018 version of its cutting-edge course

www.fast.ai/2018/05/07/part2-launch/

Their materials are cool, but their library is questionable

#deep_learning

snakers4 (Alexander), May 04, 06:59

The current state of ML

goo.gl/rzKUiQ

(1) Do not call it AI

(2) Distinguish ML from Intelligent Infrastructure and Intelligence Augmentation

(3) Human-imitative AI is not tractable now

(4) Developments which are now being called "AI" arose mostly in the engineering fields associated with low-level pattern recognition and movement control

#deep_learning

Artificial Intelligence — The Revolution Hasn’t Happened Yet

Artificial Intelligence (AI) is the mantra of the current era. The phrase is intoned by technologists, academicians, journalists and…


snakers4 (Alexander), May 01, 16:52

2018 DS/ML digest 9

Market / libraries

(0) Tensorflow + Swift - wtf - goo.gl/FDvLM4

(1) Geektimes / Habrhabr.ru going international - goo.gl/dbGNwD

(2) A service for renting GPUs ... from people

- Reddit goo.gl/HxQ54x

- Link vectordash.com/hosting/

- Looks LXC based (afaik - the only user friendly alternative to Docker)

- Cool in theory, but no idea how secure this is - we can assume it is about as secure as handing a Docker container to a stranger

- They did not reply to me for a week

(3) A friend sent me a list of ... yet more new PyTorch NLP libraries

- goo.gl/kasRfZ, goo.gl/XXnbJy (AllenNLP is the biggest library like this)

- I believe that such libraries are more or less useless for real tasks, but cool to know they exist

(4) New SpaceNet 4? goo.gl/CsSS6P

(5) A new super cool competition on Kaggle about particle physics? www.kaggle.com/c/trackml-particle-identification

Tutorials / basics

(0) Bias vs. Variance (RU) goo.gl/4Y7tH7

(1) Yet another magic Jupyter guideline collection - goo.gl/AFWMuq

Real world ML applications

(0) Resnet + object detection (RU) - detecting people w/o helmets with 90% accuracy - goo.gl/7xpQnE

(1) Fast.ai about using embeddings with Tabular data - www.fast.ai/2018/04/29/categorical-embeddings/

Very similar to our approach on electricity

I personally would not recommend using their library, though

(2) Comparing Google TPU vs. V100 with ResNet50 - goo.gl/s6dhsy

- speed - goo.gl/Pww2sm

- pricing - goo.gl/Rtkp8Q

- but ... buying GPUs is much cheaper

(3) Other blog posts about embeddings + tabular data

- Sales prediction blog.kaggle.com/2016/01/22/rossmann-store-sales-winners-interview-3rd-place-cheng-gui/

- Taxi drive prediction blog.kaggle.com/2015/07/27/taxi-trajectory-winners-interview-1st-place-team-%F0%9F%9A%95/

MLP + classification + embeddings - goo.gl/AMNGNG / arxiv.org/pdf/1508.00021.pdf

(4) Albu's solution to SpaceNet - augmentations github.com/SpaceNetChallenge/RoadDetector/tree/master/albu-solution/src/augmentations

CNN overview

Neural network part:

Split data into 4 folds randomly, but with the same number of tiles from each city in every fold

Use resnet34 as encoder and unet-like decoder (conv-relu-upsample-conv-relu) with skip connection from every layer of network. Loss function: 0.8*binary_cross_entropy + 0.2*(1 – dice_coeff). Optimizer – Adam with default params.

Train on image crops 512*512 with batch size 11 for 30 epochs (8 times more images in one epoch)

Train 20 epochs with lr 1e-4

Train 5 epochs with lr 2e-5

Train 5 epochs with lr 4e-6

Predict on full image with padding 22 on borders (1344*1344).

Merge folds by mean
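The loss above as a minimal PyTorch sketch (the smoothing eps is my assumption):

import torch
import torch.nn.functional as F

def combined_loss(logits, targets, eps=1e-7):
    # 0.8 * binary_cross_entropy + 0.2 * (1 - dice_coeff)
    bce = F.binary_cross_entropy_with_logits(logits, targets)
    probs = torch.sigmoid(logits)
    intersection = (probs * targets).sum()
    dice = (2 * intersection + eps) / (probs.sum() + targets.sum() + eps)
    return 0.8 * bce + 0.2 * (1 - dice)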

Jobs / job market

(0) Developers by country by scraping GitHub - goo.gl/n8gnLi

- developers count vs. GDP prntscr.com/j9v80e R^2 = 84%

- developers count vs. population - R^2 = 50%

Visualization

(0) Interactive tool for visualizing convolutions - ezyang.github.io/convolution-visualizer/

Datasets

(0) Open Images v4 open-sourced

- research.googleblog.com/2018/04/announcing-open-images-v4-and-eccv-2018.html

- the dataset itself storage.googleapis.com/openimages/web/download.html

- categories storage.googleapis.com/openimages/2018_04/bbox_labels_600_hierarchy_visualizer/circle.html

#data_science

#deep_learning

#digest

tensorflow/swift

swift - Swift for TensorFlow documentation repository.


snakers4 (Alexander), May 01, 07:53

Exploring GANs and unsupervised learning

Here are my findings from my hobby project about using GANs and unsupervised methods to build some decent semantic search on a large dataset of images without annotation:

(0) spark-in.me/post/unsupervised-learning-limits

Lots of cool images.

TLDR

(0) Features from a pre-trained Imagenet encoder => PCA => UMAP => HDBSCAN works really well for image clustering (a minimal sketch below);

(1) Any siamese network / hard negative mining inspired methods just did not work - the annotation data is too coarse;

(2) GANs kind of work, but I could not achieve the boasted photo-realistic levels;
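A minimal sketch of the pipeline from point (0), assuming features is an (N, 2048) numpy array of pre-trained encoder features (all hyper-params are illustrative):

from sklearn.decomposition import PCA
import umap     # the umap-learn package
import hdbscan

reduced = PCA(n_components=50).fit_transform(features)
embedded = umap.UMAP(n_neighbors=15, n_components=2).fit_transform(reduced)
labels = hdbscan.HDBSCAN(min_cluster_size=10).fit_predict(embedded)  # -1 = noise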

#deep_learning

Exploring the limits of unsupervised Machine Learning in Computer Vision

In this article I share my experience with GANs, progressive growing of GANs, image clustering and unsupervised learning. Author's articles - http://spark-in.me/author/snakers41 Blog - http://spark-in.me


snakers4 (Alexander), May 01, 06:58

Showing more images in Tensorboard

TB is super cool (especially together with this script gist.github.com/gyglim/1f8dfb1b5c82627ae3efcfbbadb9f514), but it shows only ~10 images in its image preview.

This can be fixed.

(0) Find your TB folder

import tensorboard
tensorboard.__file__

In my case it shows '/opt/conda/lib/python3.6/site-packages/tensorboard/__init__.py'

(1)

cd there

open backend/application.py

(2)

Change this line to:

image_metadata.PLUGIN_NAME: 400,

(3)

Profit - now it shows ~400 images on each view tab

#deep_learning

Logging to tensorboard without tensorflow operations. Uses manually generated summaries instead of summary ops


snakers4 (Alexander), April 29, 18:32

Downgrading PyTorch from 0.4 to 0.3

The newest PyTorch has some issues with regard to multi-GPU operation.

If you want to install the previous version, the downgrade docs are a bit outdated, but you can simply:

conda install pytorch=0.3.0 cuda90 -c pytorch

#deep_learning