May 30, 2018

Transforms in PyTorch

They added a lot of useful stuff to torchvision lately:

- pytorch.org/docs/master/torchvision/transforms.html

Basically this lets you build decent pre-processing out of the box for simple tasks (images only)

I believe it will be much slower than OpenCV, but for small tasks it's ideal if you don't look under the hood
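
A minimal sketch of such a pipeline; the resize/crop sizes and normalization stats below are the commonly used ImageNet defaults, which is my assumption for illustration, not something stated above:

```python
from torchvision import transforms

# Typical out-of-the-box preprocessing for an ImageNet-style classifier.
# Sizes and normalization stats are the usual ImageNet defaults (assumption).
preprocess = transforms.Compose([
    transforms.Resize(256),            # shorter side -> 256 px
    transforms.CenterCrop(224),        # 224x224 crop for most classifiers
    transforms.ToTensor(),             # PIL image -> float tensor in [0, 1]
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Usage: pass a PIL image, get a (3, 224, 224) tensor ready for a model
# img_tensor = preprocess(pil_image).unsqueeze(0)  # add batch dimension
```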

#deep_learning

MobileNetv2

New light-weight architecture from Google with 72%+ top-1 accuracy

(0)

Performance goo.gl/2czk9t

Link arxiv.org/abs/1801.04381

Pre-trained implementation

- github.com/tonylins/pytorch-mobilenet-v2

- but this one took much more memory than I expected

- did not debug it
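
For reference, a hedged loading sketch for a pre-trained checkpoint like this; the module name, class name and checkpoint file name are assumptions, not verified against that repo:

```python
import torch

# Assumption: the repo exposes a MobileNetV2 class and ships a .pth checkpoint;
# actual file/class/argument names in tonylins/pytorch-mobilenet-v2 may differ.
from MobileNetV2 import MobileNetV2

net = MobileNetV2()
state_dict = torch.load('mobilenetv2.pth', map_location='cpu')
net.load_state_dict(state_dict)
net.eval()

# Quick sanity check with a single 224x224 image-sized batch
with torch.no_grad():
    out = net(torch.randn(1, 3, 224, 224))
print(out.shape)  # expected: torch.Size([1, 1000])
```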

(1)

Gist - a new light-weight architecture from Google with 72%+ top-1 accuracy on ImageNet

Of course, Google promotes only its own papers there

No mention of SqueezeNet

This is somewhat disturbing

(2)

Novel ideas

- the shortcut connections are between the thin bottleneck layers

- the intermediate expansion layer uses lightweight depthwise convolutions

- it is important to remove non-linearities in the narrow layers in order to maintain representational power

(3)

Very novel idea - it is argued that non-linearities collapse some information.

When the dimensionality of the useful information is low, you can drop them without losing accuracy

(4) Building blocks

- Recent small networks' key features (except for SqueezeNet ones) - goo.gl/mQtrFM

- MobileNet building block explanation (see the code sketch below)

- goo.gl/eVnWQL goo.gl/Gj8eQ5

- Overall architecture - goo.gl/RRhxdp
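
A minimal PyTorch sketch of the inverted residual block described in (2)-(3): 1x1 expansion, lightweight depthwise 3x3, then a linear 1x1 projection with the shortcut between the thin bottlenecks. The expansion ratio of 6 and ReLU6 follow the paper; the rest of the hyper-parameters are my assumptions for illustration:

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """Sketch of the MobileNetV2 block: expand -> depthwise -> linear project.
    The shortcut connects the thin bottlenecks; the last 1x1 conv has NO
    non-linearity, per points (2)-(3) above."""
    def __init__(self, in_ch, out_ch, stride=1, expand=6):
        super().__init__()
        hidden = in_ch * expand
        self.use_residual = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            # 1x1 expansion to a wider intermediate representation
            nn.Conv2d(in_ch, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            # 3x3 depthwise convolution (groups == channels -> lightweight)
            nn.Conv2d(hidden, hidden, 3, stride, 1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            # 1x1 linear projection back to a thin bottleneck - no ReLU here
            nn.Conv2d(hidden, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_residual else out

# Sanity check: the residual path is used when stride == 1 and channels match
x = torch.randn(1, 32, 56, 56)
print(InvertedResidual(32, 32)(x).shape)  # torch.Size([1, 32, 56, 56])
```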

#deep_learning