Spark in me - Internet, data science, math, deep learning, philo

snakers4 @ telegram, 1322 members, 1482 posts since 2016

All this - lost like tears in rain.

Data science, deep learning, sometimes a bit of philosophy and math. No bs.

Our website
- spark-in.me
Our chat
- goo.gl/WRm93d
DS courses review
- goo.gl/5VGU5A
- goo.gl/YzVUKf

snakers4 (Alexander), June 18, 04:55

Playing with renewing SSL certificates + Cloudflare

I am using certbot, which makes SSL certificate installation for any web-server literally a one-liner (a couple of guides - goo.gl/nP2tij / goo.gl/X6rVxs).

It also has an amazing command certbot renew for renewing your certificates.

Unsurprisingly, it does not work when you have Cloudflare enabled. The solution in my case was as easy as:

- falling back to registrar's name-servers (luckily, my registrar stores its old DNS zone settings)

- certbot renew

- reverting back to cloudflare's DNS servers

- also, in this case, when using a VPN I did not have to wait for the DNS records to propagate - it was instant

#linux

How To Use Certbot Standalone Mode for Let's Encrypt Certificates | DigitalOcean

Certbot offers a variety of ways to validate your domain, fetch certificates, and automatically configure Apache and Nginx. In this tutorial, we'll discuss Certbot's standalone mode and how to use it to secure other types of services, such as a mail s


snakers4 (Alexander), June 12, 16:14

youtu.be/SWW0nVQNm2w

Neural Image Stitching And Morphing | Two Minute Papers #256
The paper "Neural Best-Buddies: Sparse Cross-Domain Correspondence" is available here: arxiv.org/abs/1805.04140 Pick up cool perks our Patreon page: ...

snakers4 (spark_comment_bot), June 12, 11:22

The age of open-source

Recently I started using more and more open-source / CLI tools for mundane everyday tasks.

Sometimes they have a higher barrier to entry (example - compare Google Slides vs markdown + LaTeX), but they are usually simpler, yet more powerful.

Recently I was just appalled by uTorrent's bugs and ads - and I just found out that there is even a beta of Transmission for Windows (the alternative being just using the transmission daemon on Linux).

The question is - do you know any highly useful open-source / CLI / free tools to replace standard entrenched software, which is getting a bit annoying?

Like this post or have something to say => tell us more in the comments or donate!

snakers4 (Alexander), June 12, 10:48

Interesting links about Internet

- Ben Evans' digest - goo.gl/7NkYn6

- Why it took so much time to create previews for Wikipedia - goo.gl/xg7N99

- Google postulating its AI principles - blog.google/topics/ai/ai-principles/

- Google product alternatives - goo.gl/RmA76N - I personally started to switch to more open-source stuff lately, but Docs and Android have no real alternatives

- The future of ML in embedded devices - goo.gl/PjWpKj (sound ideas, but the post is by an evangelist)

- Yahoo messenger shutting down (20 years!) - goo.gl/uhomds - hi ICQ

- Microsoft buys GitHub for $7.5 billion - a16z write-up - goo.gl/3znstT

- NYC medallions dropped 5x in price - goo.gl/Vi7pG6

- JD covers villages in China with drone delivery already - goo.gl/bMGKSY

#digest

snakers4 (Alexander), June 10, 15:35

And now the habr.ru article is also live -

habr.com/post/413775/

Please support us with your likes!

#deep_learning

#data_science

Adversarial attacks in the Machines Can See 2018 competition

Or how I ended up on the winning team of the Machines Can See 2018 adversarial competition. The essence of any adversarial attack, by example. ...


snakers4 (Alexander), June 10, 06:50

An interesting idea from a CV conference

Imagine that you have some kind of algorithm that is not exactly differentiable, but is "back-propable".

In this case you can have very convoluted logic in your "forward" statement (essentially something in between trees and dynamic programming) - for example a set of clever if-statements.

This way you get the best of both worlds - both your algorithm (which you will have to re-implement in your framework) and backprop + CNNs. Nice.

Ofc this works only for dynamic deep-learning frameworks.
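A minimal PyTorch sketch of the idea (the branching rule here is made up purely for illustration):

```python
import torch

def backpropable_routine(x):
    # Not smooth, but "back-propable": the if-branch is resolved at
    # runtime, and autograd records whichever path actually ran.
    if x.sum() > 0:
        return (x * 2).relu()
    else:
        return (x.abs() + 1).log()

x = torch.randn(5, requires_grad=True)
out = backpropable_routine(x)
out.sum().backward()
print(x.grad)  # gradients flow through the branch that was taken
```

Since the graph is rebuilt on every forward pass, arbitrarily convoluted control flow works, as long as each executed path consists of differentiable ops.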

#deep_learning

#data_science

Machines Can See 2018 adversarial competition

Happened to join forces with a team that won 2nd place in this competition

- spark-in.me/post/playing-with-mcs2018-adversarial-attacks

It was very entertaining and a new domain to me.

Read more materials:

- Our repo github.com/snakers4/msc-2018-final

- Our presentation drive.google.com/file/d/1P-4AdCqw81nOK79vU_m7IsCVzogdeSNq/view

- All presentations drive.google.com/file/d/1aIUSVFBHYabBRdolBRR-1RKhTMg-v-3f/view

#data_science

#deep_learning

#adversarial

Playing with adversarial attacks on Machines Can See 2018 competition

This article is about the MCS 2018 competition and my participation in it, adversarial attack methods and how our team won. Author's articles - http://spark-in.me/author/snakers41 Blog - http://spark-in.me


snakers4 (Alexander), June 09, 15:37

youtu.be/tU484zM3pDY

NVIDIA's AI Removes Objects From Your Photos | Two Minute Papers #255
Pick up cool perks our Patreon page: www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generou...

snakers4 (Alexander), June 07, 14:13

snakers4 (Alexander), June 07, 03:13

github.com/keras-team/keras/releases/tag/2.2.0

New Keras follows PyTorch's OOP-style model definitions?)

keras-team/keras

keras - Deep Learning for humans


snakers4 (Alexander), June 06, 19:31

youtu.be/EQX1wsL2TSs

Impersonate Anyone With This Technique | Two Minute Papers #254
The paper "HeadOn: Real-time Reenactment of Human Portrait Videos" is available here: niessnerlab.org/projects/thies2018headon.html Our Patreon page: ...

snakers4 (spark_comment_bot), June 06, 07:55

2018 DS/ML digest 11

Datasets

(0)

New Andrew Ng paper on radiology datasets

YouTube 8M Dataset post

As mentioned before - this is more or less blatant TF marketing

New papers / models / architectures

(0) Google RL search for optimal augmentations

- Blog, paper

- Finally Google paid attention to augmentations

- 83.54% top1 accuracy on ImageNet

- Discrete search problem; each policy consists of 5 sub-policies, and each operation is associated with two hyperparameters: probability and magnitude

- Training regime - cosine decay for 200 epochs

- Top accuracy on ImageNet

- Best policy

- Typical examples of augmentations

(1)

Training CNNs with less data

Key idea - with clever selection of data you can decrease annotation costs 2-3x

(2)

Regularized Evolution for Image Classifier Architecture Search (AmoebaNet)

- The first controlled comparison of the two search algorithms (genetic and RL)

- Mobile-size ImageNet (top-1 accuracy = 75.1% with 5.1M parameters)

- ImageNet (top-1 accuracy = 83.1%)

Evolution vs. RL at Large-Compute Scale

• Evolution and RL do equally well on accuracy

• Both are significantly better than Random Search

• Evolution is faster

But the proper description of the architecture is nowhere to be seen...

Libraries / code / frameworks

(0) OpenCV installation for Ubuntu18 from source (if you need e.g. video support)

News / market

(0) Idea adversarial filters for apps - goo.gl/L4Vne7

(1) A list of 30 best practices for amateur ML / DL specialists - forums.fast.ai/t/30-best-practices/12344

- Some ideas about tackling naive NLP problems

- PyTorch allegedly supports just freezing bn layers

- Also a neat idea I tried with Inception nets - assign different learning rates to different layer groups when fine-tuning larger models

(2) Stumbled upon a reference to NAdam being a bit better than Adam as an optimizer

It is also described in this popular article

(3) Barcode reader via OpenCV

#deep_learning

#digest


snakers4 (Alexander), June 05, 14:42

A very useful combination in tmux

You can resize your panes by pressing

- first ctrl+b

- then holding ctrl

- pressing the arrow keys several times while still holding ctrl

...

- profit

#linux

#deep_learning

Digest about Internet

(0) Ben Evans Internet digest - goo.gl/uoQCBb

(1) GitHub purchased by Microsoft - goo.gl/49X74r

-- If you want to migrate - there are guides already - about.gitlab.com/2018/06/03/movingtogitlab/

(2) And a post on how Microsoft kind of ruined Skype - goo.gl/Y7MJJL

-- focus on b2b

-- lack of focus, constant redesigns, faltering service

(3) No drop in FB usage after its controversies - goo.gl/V93j2v

(4) Facebook allegedly employs 1200 moderators for Germany - goo.gl/VBcYQQ

(5) Looks like many Linux networking tools have been outdated for years

dougvitale.wordpress.com/2011/12/21/deprecated-linux-networking-commands-and-their-replacements/

#internet

#digest

snakers4 (Alexander), May 31, 11:19

Forwarded from Админим с Буквой:

The difference between apt & apt-get

itsfoss.com/apt-vs-apt-get-difference/

#read #apt #thirdparty

Difference Between apt and apt-get Explained | It's FOSS

Explaining how apt command is similar yet different than apt-get and why you should be using apt instead of apt-get.


snakers4 (Alexander), May 31, 07:25

New cool papers on CNNs

(0) Do Better ImageNet Models Transfer Better?

An implicit hypothesis in modern computer vision research is that models that perform better on ImageNet necessarily perform better on other vision tasks.

However, this hypothesis has never been systematically tested.

- Wow - an empirical study of why ResNets rule - they are just better non-fine-tuned feature extractors and then are probably easier to fine-tune

- ResNets are the best fixed feature extractors

- Also ImageNet pretraining accelerates convergence

- Also my note is that inception-based models are more difficult to fine-tune.

- Among top ranking models are - Inception, NasNet, AmoebaNet

- Also my personal remark - any CNN architecture can be fine-tuned to be relatively good, you just need to invent a proper training regime

Just the abstract says it all

Here, we compare the performance of 13 classification models on 12 image classification tasks in three settings: as fixed feature extractors, fine-tuned, and trained from random initialization. We find that, when networks are used as fixed feature extractors, ImageNet accuracy is only weakly predictive of accuracy on other tasks (r2 = 0.24). In this setting, ResNets consistently outperform networks that achieve higher accuracy on ImageNet. When networks are fine-tuned, we observe a substantially stronger correlation (r2 = 0.86). We achieve state-of-the-art performance on eight image classification tasks simply by fine-tuning state-of-the-art ImageNet architectures, outperforming previous results based on specialized methods for transfer learning.

(1) Shampoo: Preconditioned Stochastic Tensor Optimization

Looks really cool - but their implementation requires SVD and is slow for real tasks

Also they tested it only on toy tasks

arxiv.org/abs/1802.09568

github.com/moskomule/shampoo.pytorch

In a real application the PyTorch implementation takes 175.58s per iteration (batch)

#deep_learning

moskomule/shampoo.pytorch

shampoo.pytorch - An implementation of shampoo


snakers4 (Alexander), May 31, 06:55

Some insights about why the recent TF speech recognition challenge dataset was so poor in quality:

- petewarden.com/2018/05/28/why-you-need-to-improve-your-training-data-and-how-to-do-it/

Cool ideas

+ a cool idea - use the last CNN layer as an embedding in a TensorBoard visualization (+ a how-to)

#deep_learning

Why you need to improve your training data, and how to do it

Photo by Lisha Li Andrej Karpathy showed this slide as part of his talk at Train AI and I loved it! It captures the difference between deep learning research and production perfectly. Academic pape…


snakers4 (Alexander), May 30, 10:03

Forwarded from Just links:

github.com/Randl/MobileNetV2-pytorch

My implementation of MobileNetV2 - currently the top low-computation model - on PyTorch 0.4. RMSProp didn't work (I have a feeling there are issues with it in PyTorch), so training is with SGD (a scheme similar to ShuffleNet's - reducing lr after 200 and 300 epochs). The results are a bit better than claimed in the paper and achieved by other repos - 72.1% top1. Supports any scaling factor / input size (divisible by 32) as described in the paper.

Randl/MobileNetV2-pytorch

MobileNetV2-pytorch - Impementation of MobileNetV2 in pytorch


snakers4 (Alexander), May 30, 05:36

Transforms in PyTorch

They added a lot of useful stuff lately:

- pytorch.org/docs/master/torchvision/transforms.html

Basically this lets you build decent pre-processing out of the box for simple tasks (images only)

I believe it will be much slower than OpenCV, but for small tasks it's ideal, if you do not look under the hood

#deep_learning

MobileNetv2

New light-weight architecture from Google with 72%+ top1

(0)

Performance goo.gl/2czk9t

Link arxiv.org/abs/1801.04381

Pre-trained implementation

- github.com/tonylins/pytorch-mobilenet-v2

- but this one took much more memory than I expected

- did not debug it

(1)

Gist - new light-weight architecture from Google with 72%+ top1 on Imagenet

Ofc Google promotes only its own papers there

No mention of SqueezeNet

This is somewhat disturbing

(2)

Novel ideas

- the shortcut connections are between the thin bottleneck layers

- the intermediate expansion layer uses lightweight depthwise convolutions

- it is important to remove non-linearities in the narrow layers in order to maintain representational power

(3)

Very novel idea - it is argued that non-linearities collapse some information.

When the dimensionality of useful information is low, you can do w/o them w/o loss of accuracy

(4) Building blocks

- Recent small networks' key features (except for SqueezeNet ones) - goo.gl/mQtrFM

- MobileNet building block explanation

- goo.gl/eVnWQL goo.gl/Gj8eQ5

- Overall architecture - goo.gl/RRhxdp
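A rough PyTorch sketch of the block described in (2) and (3) - an inverted residual with a linear bottleneck (my own simplified reading of the paper, not the reference implementation):

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    # 1x1 expand (+ReLU6) -> 3x3 depthwise (+ReLU6) -> 1x1 linear
    # projection, with a shortcut between the thin bottleneck layers
    # when shapes match.
    def __init__(self, in_ch, out_ch, stride=1, expand=6):
        super().__init__()
        hidden = in_ch * expand
        self.use_residual = (stride == 1 and in_ch == out_ch)
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            # lightweight depthwise convolution
            nn.Conv2d(hidden, hidden, 3, stride, 1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            # linear projection - deliberately no non-linearity here
            nn.Conv2d(hidden, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_residual else out

x = torch.randn(2, 32, 56, 56)
print(InvertedResidual(32, 32, stride=1)(x).shape)  # torch.Size([2, 32, 56, 56])
print(InvertedResidual(32, 64, stride=2)(x).shape)  # torch.Size([2, 64, 28, 28])
```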

#deep_learning

snakers4 (Alexander), May 28, 08:39

Dealing with class imbalance with CNNs

For small datasets / problems oversampling works best; for large datasets it's unclear

- arxiv.org/abs/1710.05381

Interestingly enough, they did not test oversampling + augmentations.
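In PyTorch the usual way to oversample is a WeightedRandomSampler; a toy sketch (the dataset and the inverse-frequency weighting are made up for illustration):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# toy imbalanced dataset: 90 samples of class 0, 10 of class 1
labels = torch.cat([torch.zeros(90, dtype=torch.long),
                    torch.ones(10, dtype=torch.long)])
data = torch.randn(100, 8)

# inverse-frequency weights => rare classes are drawn more often
class_counts = torch.bincount(labels).float()
sample_weights = 1.0 / class_counts[labels]

sampler = WeightedRandomSampler(sample_weights,
                                num_samples=len(labels),
                                replacement=True)
loader = DataLoader(TensorDataset(data, labels), batch_size=20, sampler=sampler)

ys = torch.cat([y for _, y in loader])
print(ys.float().mean())  # roughly balanced batches instead of the raw 0.1
```

Augmentations would then be applied inside the dataset's __getitem__, so oversampled duplicates do not stay identical.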

snakers4 (spark_comment_bot), May 28, 08:09

A couple of neat tricks in PyTorch to make code more compact and more useful for hyper-param tuning

You may have seen that today one can use CNNs even for tabular data.

In this case you may have to resort to a lot of fiddling with model capacity and hyper-params.

It is kind of easy to do so in Keras, but doing this in PyTorch requires a bit more fiddling.

Here are a couple of patterns that may help with this:

(0) Clever use of nn.Sequential()

self.layers = nn.Sequential(*[
    ConvLayer(in_channels=channels,
              out_channels=channels,
              kernel_size=kernel_size,
              activation=activation,
              dropout=dropout)
    for _ in range(blocks)
])

(1) Clever use of lists (which is essentially the same as above)

Just this construction may save a lot of space and give a lot of flexibility

modules = []
modules.append(...)
self.classifier = nn.Sequential(*modules)

(2) Pushing as many hyper-params into flags for console scripts

You can even encode something like 1024_512_256 to be passed as a list to your model constructor, i.e.

1024_512_256 => [1024, 512, 256] => an MLP with the corresponding number of neurons per layer

(3) (Obvious) Using OOP where it makes sense

Example I recently used for one baseline
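The encoding trick from (2) is just string parsing; a minimal argparse sketch (the flag name and the MLP mention are made up):

```python
import argparse

def parse_sizes(spec):
    # "1024_512_256" => [1024, 512, 256]
    return [int(s) for s in spec.split("_")]

parser = argparse.ArgumentParser()
# argparse applies `type` to string defaults too, so the default also
# arrives as a parsed list
parser.add_argument("--hidden-sizes", type=parse_sizes, default="1024_512_256")

args = parser.parse_args(["--hidden-sizes", "512_256_128"])
print(args.hidden_sizes)  # [512, 256, 128]
# ...which can then go straight into a model constructor,
# e.g. MLP(sizes=args.hidden_sizes)
```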

#deep_learning


Playing with MLP + embeddings in PyTorch


snakers4 (Alexander), May 26, 16:14

youtu.be/KL6U6iasUxs

AI-Based Large-Scale Texture Synthesis | Two Minute Papers #252
Pick up cool perks on our Patreon page: www.patreon.com/TwoMinutePapers One-time payment links and crypto addresses are available below. Thank you ve...

snakers4 (Alexander), May 25, 07:29

New competitions on Kaggle

Kaggle has started a new competition with video ... which is one of those competitions (read between the lines - blatant marketing)

www.kaggle.com/c/youtube8m-2018

I.e.

- TensorFlow Record files

- Each of the top 5 ranked teams will receive $5,000 per team as a travel award - no real prizes

- The complete frame-level features take about 1.53TB of space (and yes, these are not videos, but extracted CNN features)

So, they are indeed using their platform to promote their business interests.

Released free datasets are really cool, but only when you can use them for transfer learning, which implies also seeing the underlying ground-level data (i.e. the images or videos).

#data_science

#deep_learning

The 2nd YouTube-8M Video Understanding Challenge

Can you create a constrained-size model to predict video labels?


snakers4 (Alexander), May 23, 13:01

Using groupby in pandas in multi-thread fashion

Sometimes you just need to use all of your CPUs to process some nasty thing in pandas quick and dirty (because you are too lazy to do it properly).

Pandas' GroupBy: Split, Apply, Combine seems to have been built exactly for that, but there is also a lazy workaround.

Solution I googled

- gist.github.com/tejaslodaya/562a8f71dc62264a04572770375f4bba

My lazy way using tqdm + Pool

- gist.github.com/snakers4/b246de548669543dc3b5dbb49d4c2f0c

(Savva, if you read this, I know that your version is better, you can also send it to me to share xD)
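The core of the Pool pattern looks roughly like this (a sketch: `apply_parallel`, `process_group` and the toy frame are mine; wrapping the group iterator in tqdm is what adds the progress bar):

```python
import pandas as pd
from multiprocessing import Pool

def process_group(group):
    # any expensive per-group computation goes here
    return group["value"].sum()

def apply_parallel(grouped, func, workers=4):
    keys = [name for name, _ in grouped]
    with Pool(workers) as pool:
        # a GroupBy object can be iterated more than once
        results = pool.map(func, (group for _, group in grouped))
    return pd.Series(results, index=keys)

if __name__ == "__main__":
    df = pd.DataFrame({"key": list("aabb"), "value": [1, 2, 3, 4]})
    print(apply_parallel(df.groupby("key"), process_group, workers=2))
```

Note that `func` must be a top-level function (lambdas do not pickle across processes).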

#ds

pandas DataFrame apply multiprocessing


snakers4 (Alexander), May 23, 12:52

(RU) A cool post series on habr about auto-encoders

habr.com/post/331382/

#ds

#ml

#dl

Autoencoders in Keras, Part 1: Introduction

Contents - Part 1: Introduction; Part 2: Manifold learning and latent variables; Part 3: Variational autoencoders (VAE); Part 4: Conditional VAE...


snakers4 (spark_comment_bot), May 21, 06:21

2018 DS/ML digest 11

Cool thing this week

(0) ML vs. compute study since 2012 - chart / link

Market

(0) Once again about Google Duplex

(1) Google announcements from Google IO

-- Email autocomplete

We encode the subject and previous email by averaging the word embeddings in each field. We then join those averaged embeddings, and feed them to the target sequence RNN-LM at every decoding step, as the model diagram below shows.

-- Learning Semantic Textual Similarity from Conversations blog, paper. Something along the lines of Sentence2Vec, but for conversations; self-supervised, uses attention and embedding averaging

-- Google Clips device + interesting moment estimation on the device. Looks like MobileNet distillation into a small network with some linear models on top
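The embedding averaging described in the email-autocomplete quote boils down to something like this toy numpy sketch (vocabulary, dimensions and tokens are made up):

```python
import numpy as np

rng = np.random.default_rng(42)
vocab = {"meeting": 0, "tomorrow": 1, "at": 2, "noon": 3}
emb = rng.normal(size=(len(vocab), 8))  # toy word-embedding matrix

def encode_field(tokens):
    # average the word embeddings of one field (subject / previous email)
    ids = [vocab[t] for t in tokens if t in vocab]
    return emb[ids].mean(axis=0)

# join the averaged embeddings of both fields; in the real system this
# vector conditions the target-sequence RNN-LM at every decoding step
subject = encode_field(["meeting", "tomorrow"])
prev_email = encode_field(["at", "noon"])
context = np.concatenate([subject, prev_email])
print(context.shape)  # (16,)
```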

Libraries / tools / papers

(0) SaaS NLP annotation tool

(1) CNNs allegedly can reconstruct low light images? Blog, paper, Looks cool AF

(2) Cool thing to try in a new project - a Postgres RESTful API wrapper - such things require a lot of care, but can eliminate a lot of useless work for small projects.

For my blog I had to write a simple business-tier layer myself. I doubt that I could use this w/o overengineering, because I constructed, for example, open-graph tags right in SQL queries

Job / job market

(0) (RU) Realistic IT immigration story

Datasets

(0) Last week open images dataset was updated. I downloaded the small one for the sake of images. Though the download process itself is a bit murky

#machine-learning

#digest

#deep-learning


snakers4 (Alexander), May 19, 14:32

A thorough and short guide to the Matplotlib API

A bit of history, a small look under the hood and a logical explanation of how to use it best:

realpython.com/python-matplotlib-guide/
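One of the guide's central points - use the object-oriented API instead of pyplot's hidden state machine - in a nutshell (my own minimal example):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, for scripts
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 2 * np.pi, 100)

# explicit Figure / Axes objects instead of implicit pyplot state
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.plot(x, np.sin(x), label="sin")
ax2.plot(x, np.cos(x), label="cos")
for ax in (ax1, ax2):
    ax.set_xlabel("x")
    ax.legend()
fig.tight_layout()
fig.savefig("waves.png")
```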

#data_science

Python Plotting With Matplotlib (Guide) – Real Python

This article is a beginner-to-intermediate-level walkthrough on Python and matplotlib that mixes theory with example.


snakers4 (Alexander), May 18, 17:47

A very cool sci-fi story about ethics, game theory and philosophy

www.lesswrong.com/posts/HawFh7RvDM4RyoJ2d/three-worlds-collide-0-8

#philosophy

Three Worlds Collide (0/8)

"The kind of classic fifties-era first-contact story that Jonathan Swift might have written, if Jonathan Swift had had a background in game theory." -- (Hugo nominee) Peter Watts, "... (Read More)


snakers4 (Alexander), May 18, 06:59

Using ncdu with exclude

A really good extension of standard du

sudo ncdu --exclude /exclude_folder /

Useful when something is mounted in /media or /mnt

#linux

snakers4 (spark_comment_bot), May 17, 09:55

Playing with 3D interactive scatter plots

Turns out you can do this using IPython widgets + ipyvolume.

Best example:

- Playing with particle data (nbviewer.jupyter.org/urls/gist.githubusercontent.com/maartenbreddels/04575b217aaf527d4417173f397253c7/raw/926a0e57403c0c65eb55bc52d5c7401dc1019fdf/trackml-ipyvolume.ipynb)

All of this looks kind of wobbly / new and a bit useless, but it works, is free and fast.

I was also trying to assign each point a colour like here

But in the end a much simpler approach just worked

fig = ipv.figure()
N = len(hits.volume_id.unique())
cmap = matplotlib.cm.get_cmap("tab20", N)
colors = cmap(np.linspace(0, 1.0, N))
colors = ["#%02x%02x%02x" % tuple([int(k * 255) for k in matplotlib.colors.to_rgb(color)[:3]]) for color in colors]

for i in range(0, N):
    hits_v = hits[hits.volume_id == list(hits.volume_id.unique())[i]]
    scatter = ipv.scatter(hits_v.x, hits_v.y, hits_v.z, marker="diamond", size=0.1, color=colors[i])

ipv.show()


snakers4 (Alexander), May 16, 13:21

A very cool, fast and simple way to make presentations

www.youtube.com/watch?v=dum7q6UXiCE

The Easiest Way to Make Presentations! (Pandoc + Markdown)
Support the channel!: Patreon.com/LukeSmith Give me money: PayPal.me/LukeMSmith Ask a question: [email protected] Get my configs: gi...

snakers4 (Alexander), May 15, 17:11

youtu.be/fklY2nH7AJo

AI Learns Painterly Harmonization | Two Minute Papers #249
The paper "Deep Painterly Harmonization" and its source code is available here: arxiv.org/abs/1804.03189 github.com/luanfujun/deep-painterly-...