Spark in me - Internet, data science, math, deep learning, philo

snakers4 @ telegram, 1234 members, 1316 posts since 2016

All this - lost like tears in rain.

Internet, data science, math, deep learning, philosophy. No bs.

Our website
- spark-in.me
Our chat
- goo.gl/WRm93d
DS courses review
- goo.gl/5VGU5A
- goo.gl/YzVUKf

Posts by tag «deep_learning»:

snakers4 (Alexander), February 19, 10:25

One more article about the usual suspects when your CNN fails to train

- blog.slavv.com/37-reasons-why-your-neural-network-is-not-working-4020854bd607

#deep_learning

37 Reasons why your Neural Network is not working

The network had been training for the last 12 hours. It all looked good: the gradients were flowing and the loss was decreasing. But then…


snakers4 (Alexander), February 18, 08:05

Even though I am preparing a large release on applying GANs to a real example, I just could not help sharing these two links.

They are simply the gold standard for GANs on PyTorch:

- github.com/martinarjovsky/WassersteinGAN

- github.com/soumith/ganhacks

Also, this is the most idiomatic PyTorch code (ImageNet fine-tuning) I have ever seen

- gist.github.com/panovr/2977d9f26866b05583b0c40d88a315bf

So if you are new to PyTorch, then these links will be very useful)
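For reference, here is a minimal fine-tuning sketch in the same spirit (my own toy example, not the gist itself): load a pretrained torchvision model, swap the classifier head and train only the new layer first. The number of classes and the hyperparameters are placeholders.

import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False              # freeze the pretrained backbone

num_classes = 10                             # placeholder for your target classes
model.fc = nn.Linear(model.fc.in_features, num_classes)

optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)  # forward pass through frozen backbone + new head
    loss.backward()
    optimizer.step()
    return loss.item()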

#pytorch

#deep_learning

#gans

martinarjovsky/WassersteinGAN

Contribute to WassersteinGAN development by creating an account on GitHub.


Which of the latest projects did you like the most?

Still waiting for GANs – 16

👍👍👍👍👍👍👍 48%

Satellites! – 9

👍👍👍👍 27%

Nothing / not interested / missed them – 4

👍👍 12%

Jungle! – 3

👍 9%

PM me for other options – 1

▫️ 3%

👥 33 people voted so far.

snakers4 (Alexander), February 14, 11:48

2017 DS/ML digest 4

Applied cool stuff

- How Dropbox built their OCR - via CTC loss - goo.gl/Dumcn9

Fun stuff

- CNN forward pass done in Google Sheets - goo.gl/pyr44P

- New Boston Dynamics robot - opens doors now - goo.gl/y6G5bo

- Cool but toothless list of jupyter notebooks with illustrations and models modeldepot.io

- Best CNN filter visualization tool ever - ezyang.github.io/convolution-visualizer/index.html

New directions / moonshots / papers

- IMPALA from Google - DMLab-30, a set of new tasks that span a large variety of challenges in a visually unified environment with a common action space

-- goo.gl/7ASXdk

-- twitter.com/DeepMindAI/status/961283614993539072

- Trade crypto via RL - goo.gl/NmCQSY?

- SparseNets? - arxiv.org/pdf/1801.05895.pdf

- Use Apple watch data to predict diseases arxiv.org/abs/1802.02511?

- Google - Evolution in auto ML kicks in faster than RL - arxiv.org/pdf/1802.01548.pdf

- R-CNN for human pose estimation + dataset

-- Website + video densepose.org

-- Paper arxiv.org/abs/1802.00434

Google's Colaboratory gives free GPUs?

- Old GPUs

- 12 hours limit, but very cool in theory

- habrahabr.ru/post/348058/

- www.kaggle.com/getting-started/47096#post271139

Sick sad world

- Chinese police now have Google Glass-style eyewear with face recognition goo.gl/qfNGk7

- Why slack sucks - habrahabr.ru/post/348898/

-- Email + Google Docs are better for real communication

Market

- Globally there are 22k ML developers goo.gl/1Jpt9P

- One more AI chip moonshot - goo.gl/199f5t

- Google made their TPUs public in beta - US$6 per hour

- CNN performance comparable to human level in dermatology (R-CNN) - goo.gl/gtgXVn

- Deep learning is greedy, brittle, opaque, and shallow goo.gl/7amqxB

- One more medical ML investment - US$25m for cancer - goo.gl/anndPP

#digest

#data_science

#deep_learning

snakers4 (Alexander), February 14, 04:54

Article on SpaceNet Challenge Three in Russian on Habrahabr - please support us with your comments / upvotes

- habrahabr.ru/post/349068/

Also if you missed:

- The original article spark-in.me/post/spacenet-three-challenge

- The original code release github.com/snakers4/spacenet-three

... and Jeremy Howard from fast.ai retweeted our solution, lol

- twitter.com/alxndrkalinin/status/961268736077582336

=)

But to give some idea of the pain the TopCoder platform inflicts on contestants, you can read:

- Data Download guide goo.gl/EME8nA

- Final testing guide goo.gl/DCvTNN

- Code release for their verification process

github.com/snakers4/spacenet-three-topcoder

#data_science

#deep_learning

#satellite_imaging

From satellite imagery to graphs (the SpaceNet Road Detector competition) — a top-10 finish and code (translation)

Hi, Habr! Here is my translation of the article. This is Vegas with the provided labels and the test dataset; the white squares are probably the held-out validation...


snakers4 (Alexander), February 13, 07:55

Interesting hack from n01z3 from ODS

For getting that extra 1%

Snapshot Ensembles / Multi-checkpoint TTA (sketched in code below):

- goo.gl/f5D2ER

- Train CNN with LR decay until convergence, use SGD or Adam

- Restart training from the best checkpoint with a cyclic LR, training for several epochs per cycle

- Collect checkpoints with the best loss and use them for ensembles / TTA
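A rough sketch of this trick (my own interpretation of the steps above; train_one_epoch is a hypothetical user-supplied training loop): restart from the best weights with a cyclic schedule, snapshot the model at the end of each cycle, then average predictions over the snapshots.

import copy
import torch
from torch.optim.lr_scheduler import CosineAnnealingLR

def collect_snapshots(model, optimizer, train_one_epoch, cycles=5, epochs_per_cycle=4, lr_max=1e-2):
    snapshots = []
    for _ in range(cycles):
        for group in optimizer.param_groups:
            group["lr"] = lr_max                              # restart each cycle at a high LR
        scheduler = CosineAnnealingLR(optimizer, T_max=epochs_per_cycle)
        for _ in range(epochs_per_cycle):
            train_one_epoch(model, optimizer)                 # your existing training loop
            scheduler.step()
        snapshots.append(copy.deepcopy(model.state_dict()))   # snapshot at the cycle minimum
    return snapshots

def ensemble_predict(model, snapshots, batch):
    preds = []
    with torch.no_grad():
        for state in snapshots:
            model.load_state_dict(state)
            model.eval()
            preds.append(torch.softmax(model(batch), dim=1))
    return torch.stack(preds).mean(0)                         # average over all snapshots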

#deep_learning

Google TPUs are released in beta... US$200 per day?

No thank you! Also looks like only TF is supported so far.

Combined with rumours, sounds impractical.

- goo.gl/UDkE8B

#deep_learning

Cloud TPU machine learning accelerators now available in beta

By John Barrus, Product Manager for Cloud TPUs, Google Cloud and Zak Stone, Product Manager for TensorFlow and Cloud TPUs, Google Brain Team...


snakers4 (Alexander), February 10, 07:13

So we started publishing articles / code / solutions to the recent SpaceNet3 challenge. A Russian article on habrahabr.ru will also be published soon.

- The original article spark-in.me/post/spacenet-three-challenge

- The original code release github.com/snakers4/spacenet-three

... and Jeremy Howard from fast.ai retweeted our solution, lol

- twitter.com/alxndrkalinin/status/961268736077582336

=)

But to give some idea of the pain the TopCoder platform inflicts on contestants, you can read:

- Data Download guide goo.gl/EME8nA

- Final testing guide goo.gl/DCvTNN

- Code release for their verification process

github.com/snakers4/spacenet-three-topcoder

#data_science

#deep_learning

#satellite_imaging

How we participated in SpaceNet three Road Detector challenge

This article tells about our SpaceNet Challenge participation, semantic segmentation in general and transforming masks into graphs. Author's articles - http://spark-in.me/author/snakers41. Blog - http://spark-in.me


snakers4 (Alexander), February 08, 09:13

Fast.ai lesson 11 notes:

- Links

-- Video www.youtube.com/watch?v=bZmJvmxfH6I&feature=youtu.be

-- course.fast.ai/lessons/lesson11.html

- Semantic embeddings + imagenet can be powerful, but not deployable per se

- Training nets on smaller images usually works

- Comparing activation functions goo.gl/7JakeK

- lr annealing goo.gl/MEu1p8

- linear learnable colour swap trick goo.gl/DnCsHw

- adding Batchnorm goo.gl/Bh8Evh

- replacing max-pooling with avg_pooling goo.gl/1nhPvq

- lr vs batch-size goo.gl/TGJjy7

- dealing with noisy labels goo.gl/RhWzov

- FC / max-pooling layer models are better for transfer-learning?

- size vs. flops vs. speed goo.gl/GjfY4p

- cyclical learning rate paper goo.gl/3XigT9

- Some nice intuitions about mean shift clustering (toy example after the links below)

-- spin.atomicobject.com/2015/05/26/mean-shift-clustering/

-- vision.stanford.edu/teaching/cs131_fall1314_nope/lectures/lecture13_kmeans_cs131.pdf
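A toy mean shift example (assuming scikit-learn; the data is synthetic): the algorithm repeatedly shifts points towards the mean of their neighbourhood, so the number of clusters is discovered rather than specified up front.

import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

X = np.concatenate([np.random.randn(100, 2), np.random.randn(100, 2) + 5])  # two synthetic blobs
bandwidth = estimate_bandwidth(X, quantile=0.2)
ms = MeanShift(bandwidth=bandwidth).fit(X)
print(len(ms.cluster_centers_), ms.cluster_centers_)   # expect 2 centres, near (0,0) and (5,5)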

#data_science

#deep_learning

Lesson 11: Cutting Edge Deep Learning for Coders
We’ve covered a lot of different architectures, training algorithms, and all kinds of other CNN tricks during this course—so you might be wondering: what sho...

Meta research on CNNs

(also this amazing post towardsdatascience.com/neural-network-architectures-156e5bad51ba)

An Analysis of Deep Neural Network Models for Practical Applications

arxiv.org/abs/1605.07678

Key findings:

(1) power consumption is independent of batch size and architecture;

(2) accuracy and inference time are in a hyperbolic relationship;

(3) energy constraint = upper bound on the maximum achievable accuracy and model complexity;

(4) the number of operations is a reliable estimate of the inference time (toy check below).

Charts

- Accuracy and param number - goo.gl/Zh6uGd

- Param efficiency - goo.gl/PrVtd5

Also a summary of architectural patterns

- goo.gl/KuS2ja
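A quick, unscientific way to poke at findings (3)-(4) yourself (assumes torchvision; CPU timings, purely illustrative): count parameters and time a forward pass for a few of the architectures from the paper.

import time
import torch
from torchvision import models

for name in ["alexnet", "resnet18", "vgg16"]:
    model = getattr(models, name)(pretrained=False).eval()
    n_params = sum(p.numel() for p in model.parameters())
    x = torch.randn(1, 3, 224, 224)
    with torch.no_grad():
        start = time.time()
        for _ in range(10):
            model(x)
        elapsed = (time.time() - start) / 10
    print(f"{name}: {n_params / 1e6:.1f}M params, {elapsed * 1000:.1f} ms / image on CPU")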

======================================

Deep Learning Scaling is Predictable, Empirically

arxiv.org/abs/1712.00409

TLDR

- goo.gl/6hdThJ

Implications

- various empirical learning curves show a robust power-law region (toy fit below)

- new architectures slightly shift learning curves downwards

- model architecture exploration should be feasible with small training data sets

- it can be difficult to ensure that training data is large enough to see the power-law learning curve region

- learning curves eventually flatten into an irreducible error region

- each new hardware generation with an improved FLOP rate can provide a predictable step-function improvement in relative DL model accuracy
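To illustrate the power-law region, a toy fit on synthetic numbers (not the paper's data): on a log-log scale, validation error vs. training-set size is close to a straight line, so the exponent falls out of a linear fit.

import numpy as np

train_sizes = np.array([1e4, 3e4, 1e5, 3e5, 1e6])
val_error = np.array([0.30, 0.22, 0.15, 0.11, 0.08])   # hypothetical measurements

slope, intercept = np.polyfit(np.log(train_sizes), np.log(val_error), 1)
print(f"error ≈ {np.exp(intercept):.2f} * N^{slope:.2f}")   # negative exponent = power-law decay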

#data_science

#deep_learning

Neural Network Architectures

Deep neural networks and Deep Learning are powerful and popular algorithms. And a lot of their success lays in the careful design of the…


snakers4 (Alexander), February 07, 14:09

Following our blog post, we also posted a Russian translation of the Jungle competition article to Habrahabr

- habrahabr.ru/post/348540/

#data_science

#deep_learning

The Pri-matrix Factorization competition on DrivenData with 1 TB of data — how we took 3rd place (translation)

Hi, Habr! I present a translation of the article "Animal detection in the jungle — 1TB+ of data, 90%+ accuracy and 3rd place in the competition". Or...


snakers4 (Alexander), February 06, 05:23

We are starting to publish our code / solutions / articles from recent competitions (Jungle and SpaceNet three).

This time the code will be more polished / idiomatic, so that you can learn something from it!

Jungle competition

- Finally it was verified that we indeed won 3rd place)

- drivendata.org/competitions/49/deep-learning-camera-trap-animals/

Blog posts

- spark-in.me/post/jungle-animal-trap-competition-part-one

- An adaptation for habrahabr.ru will be coming soon

Code release and architecture:

- Code github.com/snakers4/jungle-dd-av-sk

- Architecture

-- 1st place (kudos to Dmytro) - simple and nice goo.gl/19S6WJ

-- Ours goo.gl/st2mGS

-- 2nd place - 4-5 levels of stacking goo.gl/wn5vEW

Please comment under posts / share / buy us a coffee!

- Buy a coffee buymeacoff.ee/8oneCIN

- Rate our channel tg://resolve?domain=tchannelsbot&start=snakers4

#data_science

#deep_learning

#competitions

Pri-matrix Factorization

Chimp&See has collected nearly 8,000 hours of footage reflecting chimpanzee habitats from camera traps across Africa. Your challenge is to build a model that identifies the wildlife in these videos.


snakers4 (Alexander), February 05, 14:57

We also managed to get into the top 10 in the SpaceNet3 Road Detection challenge

- goo.gl/hswjGp

(Final confirmation awaits)

Here is a sneak peek of our solution

- goo.gl/2yAZAE

A blog post + repo will follow

#data_science

#deep_learning

#satellite_imaging

#graphs

Flowchart Maker & Online Diagram Software

draw.io is a free online diagramming application and flowchart maker . You can use it to create UML, entity relationship, org charts, BPMN and BPM, database schema and networks. Also possible are telecommunication network, workflow, flowcharts, maps overlays and GIS, electronic circuit and social network diagrams.


snakers4 (Alexander), February 02, 04:58

A more concise alternative to nvidia-smi

watch --color -n1.0 gpustat --color

Installation:

pip3 install gpustat

You can also use the Python bindings for the GPU drivers, but I only managed to find bindings for Python 2.
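For what it is worth, there is also a Python 3 port of the NVML bindings (the nvidia-ml-py3 package); a minimal sketch assuming the standard pynvml API:

import pynvml

pynvml.nvmlInit()
for i in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    util = pynvml.nvmlDeviceGetUtilizationRates(handle)
    print(f"GPU {i}: {util.gpu}% util, {mem.used / 2**20:.0f} / {mem.total / 2**20:.0f} MiB")
pynvml.nvmlShutdown()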

#linux

#deep_learning

snakers4 (Alexander), February 01, 11:25

2017 DS/ML digest 2

Libraries

- One more RL library (last year saw 1 or 2) ray.readthedocs.io/en/latest/rllib.html

- Speech recognition from facebook - github.com/facebookresearch/wav2letter

- Even better speech generation than WaveNet - goo.gl/mTwyoV - I cannot tell the computer apart from a human

Industry (overdue news)

- Nvidia does not like its consumer GPUs being deployed in data centers goo.gl/n8mkxk

- Clarifai kills forevery goo.gl/PxcjvT

- Google search and gorillas vs. black people - goo.gl/t6LwLN

Blog posts

- Baidu - dataset size vs. accuracy goo.gl/j6M5ZP (log-scale)

-- goo.gl/AYan3f

-- goo.gl/JyVNHG

Datasets

- New Youtube actions dataset - arxiv.org/abs/1801.03150

Papers - current topic - meta learning / CNN optimization and tricks

- Systematic evaluation of CNN advances on the ImageNet arxiv.org/abs/1606.02228

-- prntscr.com/i8il35

- TRAINING DEEP NEURAL NETWORKS ON NOISY LABELS WITH BOOTSTRAPPING arxiv.org/abs/1412.6596 (loss sketched after this list)

-- prntscr.com/i8iq1p

- Cyclical Learning Rates for Training Neural Networks arxiv.org/abs/1506.01186

-- prntscr.com/i8iqjx

- SEARCHING FOR ACTIVATION FUNCTIONS - arxiv.org/abs/1710.05941

-- prntscr.com/i8l0sd

-- prntscr.com/i8l5dp

- Large batch => train Imagenet in 15 mins

-- arxiv.org/abs/1711.04325

- Practical analysis of CNNs

-- arxiv.org/abs/1605.07678
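On the noisy-labels paper above, a sketch of the "soft bootstrapping" loss (my own reading of the idea, not the authors' code): mix the noisy one-hot target with the model's own prediction so that confident predictions can partially override suspect labels.

import torch.nn.functional as F

def soft_bootstrap_loss(logits, noisy_targets, beta=0.95):
    log_p = F.log_softmax(logits, dim=1)
    p = log_p.exp()
    one_hot = F.one_hot(noisy_targets, num_classes=logits.size(1)).float()
    target = beta * one_hot + (1.0 - beta) * p      # convex mix of the noisy label and the prediction
    return -(target * log_p).sum(dim=1).mean()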

#digest

#data_science

#deep_learning

snakers4 (Alexander), January 30, 04:57

Simple Keras + web service deployment guidelines from FChollet + PyImageSearch

- www.pyimagesearch.com/wp-content/uploads/2018/01/keras_api_header.png

- blog.keras.io/building-a-simple-keras-deep-learning-rest-api.html?__s=jzpzanwy9jmh18omiik2

- www.pyimagesearch.com/2018/01/29/scalable-keras-deep-learning-rest-api/

Also, an engineer from our team told me that this architecture sucks under high load because Redis requires object serialization, which takes a lot of time for images. Native Python process management works better.
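The core of the pattern from these posts, stripped of the Redis queue, is just "load the model once, serve it over HTTP". A minimal sketch assuming Flask, Pillow and Keras are installed (the /predict endpoint name is my own):

import io
from PIL import Image
from flask import Flask, request, jsonify
from keras.applications import ResNet50
from keras.applications.resnet50 import preprocess_input, decode_predictions
from keras.preprocessing.image import img_to_array

app = Flask(__name__)
model = ResNet50(weights="imagenet")          # load the model once at startup, not per request

@app.route("/predict", methods=["POST"])
def predict():
    image = Image.open(io.BytesIO(request.files["image"].read())).convert("RGB")
    x = img_to_array(image.resize((224, 224)))[None]          # to a (1, 224, 224, 3) batch
    preds = decode_predictions(model.predict(preprocess_input(x)), top=3)[0]
    return jsonify([{"label": label, "prob": float(prob)} for (_, label, prob) in preds])

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)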

#data_science

#deep_learning

snakers4 (Alexander), January 29, 07:29

Some nice boilerplate on neural style transfer

- medium.com/artists-and-machine-intelligence/neural-artistic-style-transfer-a-comprehensive-look-f54d8649c199

#deep_learning

Neural Artistic Style Transfer: A Comprehensive Look

Spring Quarter of my freshman year, I took Stanford’s CS 231n course on Convolutional Neural Networks. My final project for the course…


snakers4 (Alexander), January 29, 06:00

Nice example of group convolutions via pytorch

- goo.gl/4d3JQC
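Minimal illustration of the idea (my own toy snippet, not from the repo above): a grouped 3x3 convolution splits the input channels into independent groups, so it needs far fewer weights than a dense convolution of the same shape.

import torch
import torch.nn as nn

dense = nn.Conv2d(64, 128, kernel_size=3, padding=1)
grouped = nn.Conv2d(64, 128, kernel_size=3, padding=1, groups=8)   # 8 groups of 8 input channels

x = torch.randn(1, 64, 32, 32)
print(dense(x).shape, grouped(x).shape)               # both torch.Size([1, 128, 32, 32])
print(sum(p.numel() for p in dense.parameters()),     # 73,856 parameters
      sum(p.numel() for p in grouped.parameters()))   # 9,344 parameters - 8x fewer conv weights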

#deep_learning

#pytorch

Cadene/pretrained-models.pytorch

pretrained-models.pytorch - Pretrained ConvNets for pytorch: NASNet, ResNeXt, ResNet, InceptionV4, InceptionResnetV2, Xception, DPN, etc.


snakers4 (Alexander), January 29, 04:45

Classic / basic CNN papers

Aggregated Residual Transformations for Deep Neural Networks (ResNeXt)

- Authors: Saining Xie / Ross Girshick / Piotr Dollár / Zhuowen Tu / Kaiming He

- Link arxiv.org/abs/1611.05431

- Resnet and VGG go deeper

- Inception nets go wider; despite their efficiency, they are hard to re-purpose and design

- key idea - add group convolutions to the residual block (sketched after the illustrations below)

- illustrations

-- basic building block goo.gl/L8PjUF

-- same block in terms of group convolutions goo.gl/fZKmgf

-- overall architecture goo.gl/WWSxRv

-- performance - goo.gl/vgLN8G - +1% vs resnet
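A hypothetical minimal ResNeXt bottleneck block, just to make the "grouped 3x3 inside a residual bottleneck" idea concrete (channel widths are illustrative; see the paper for the per-stage values):

import torch
import torch.nn as nn

class ResNeXtBlock(nn.Module):
    def __init__(self, channels, cardinality=32, bottleneck_width=4):
        super().__init__()
        inner = cardinality * bottleneck_width
        self.net = nn.Sequential(
            nn.Conv2d(channels, inner, 1, bias=False), nn.BatchNorm2d(inner), nn.ReLU(inplace=True),
            nn.Conv2d(inner, inner, 3, padding=1, groups=cardinality, bias=False),   # grouped 3x3
            nn.BatchNorm2d(inner), nn.ReLU(inplace=True),
            nn.Conv2d(inner, channels, 1, bias=False), nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.net(x))   # residual connection around the grouped bottleneck

block = ResNeXtBlock(256)
print(block(torch.randn(2, 256, 14, 14)).shape)   # torch.Size([2, 256, 14, 14])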

#data_science

#deep_learning

snakers4 (Alexander), January 28, 11:50

Dockerfile update for CUDA9 - CUDNN7:

- goo.gl/JwUXN5

Hello world in PyTorch and TensorFlow seems to be working.

#data_science

#deep_learning

Dockerfile update


snakers4 (Alexander), January 28, 09:20

What is amazing about TF and the CUDA / CUDNN drivers is that the documentation is not updated when newer versions are released - and they keep changing library file names, which is annoying af.

Arguably Google and Nvidia are the richest companies in the whole DS stack - but their documentation is the worst of the lot.

So if you are updating your docker container and libraries suddenly start producing weird errors - look for compatibility guidelines like this one - goo.gl/cF3Swy

Of course the docs and release notes will have no mention of this. Because Google.

Also, Docker Hub contains all versions of CUDA+CUDNN packaged, which helps

- hub.docker.com/r/nvidia/cuda/

PS

Pytorch has all this embedded into their official repo list

- prntscr.com/i6nfsl

Google, why do you make us suffer?

#deep_learning

How to install Tensorflow 1.5.0 using official pip package | Python 3.6

Hello everyone. This is going to be a tutorial on how to install tensorflow using official pre-built pip packages. In this tutorial, we will look at how to install tensorflow 1.5.0 CPU and GPU both for Ubuntu as well as Windows OS.


snakers4 (Alexander), January 27, 07:21

Best link about convolution arithmetic

- github.com/vdumoulin/conv_arithmetic/blob/master/README.md
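The core relationship those animations illustrate is the standard output-size formula; a tiny self-check (my own snippet):

def conv_out_size(i, k, s=1, p=0):
    # output size for input size i, kernel k, stride s, padding p
    return (i + 2 * p - k) // s + 1

assert conv_out_size(224, 7, s=2, p=3) == 112   # e.g. the 7x7/stride-2 ResNet stem convolution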

#deep_learning

vdumoulin/conv_arithmetic

conv_arithmetic - A technical report on convolution arithmetic in the context of deep learning


snakers4 (Alexander), January 23, 17:23

Key / classic CNN papers

ShuffleNet

ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices

- a small ResNet-like network that uses pointwise group convolutions, depthwise separable convolutions and a shuffle layer

- authors - Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, Jian Sun

- paper - arxiv.org/abs/1707.01083

- key

-- on ARM devices 13x faster than AlexNet

-- lower top1 error than MobileNet at 40 MFLOPs

- comparable to small versions of NASNET

- 2 ideas

-- use depthwise convolutions for the 3x3 convolutions and group convolutions for the 1x1 convolutions

-- use a shuffle layer (reshape into groups, transpose, flatten back to the original dimension - sketched below)

- illustrations

-- shuffle idea - goo.gl/zhTV4E

-- building blocks - goo.gl/kok7bL

-- vs. key architectures goo.gl/4usdM9

-- vs. MobileNet goo.gl/rGoPWX

-- actual inference on mobile device - goo.gl/X6vbnd
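The channel shuffle itself is a three-line tensor operation; a minimal sketch (my own, mirroring the description above):

import torch

def channel_shuffle(x, groups):
    n, c, h, w = x.size()
    x = x.view(n, groups, c // groups, h, w)   # split channels into groups
    x = x.transpose(1, 2).contiguous()         # swap the group and per-group dimensions
    return x.view(n, c, h, w)                  # flatten back to the original shape

x = torch.arange(8).float().view(1, 8, 1, 1)
print(channel_shuffle(x, groups=2).view(-1))   # tensor([0., 4., 1., 5., 2., 6., 3., 7.])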

#deep_learning

#data_science

snakers4 (Alexander), January 23, 06:51

A new interesting competition on topcoder

- goo.gl/ix7xpx

At least at first glance)

#data_science

#deep_learning

Pre-register now for KONICA MINOLTA Image Segmentation Challenge

This contest aims to create new image recognition technology that could detect abnormality of a product to be used for visual inspection purpose.


snakers4 (Alexander), January 21, 05:49

A list of nice to read articles (RU)

- Nice article about credit score competition - goo.gl/7cy3Y1

- Feature engineering goo.gl/NkoxWQ

- If you are hardware-strapped - an RPi + Movidius stick may work better for inference than just an RPi - goo.gl/HC7Uj8

#data_science

#deep_learning

Estimating the probability of loan default

A post describing a solution to a competition on the SASCOMPETITIONS platform. The organizers allowed me to publish the code and a description of the solution logic, but under the agreement I hand over the rights to the algorithm and, possibly, ...


snakers4 (Alexander), January 20, 15:13

2017 DS/ML digest 1

Have not done digests for quite some time =)

1. Annual digests

1.1 Google Brain - part one goo.gl/VQhZmP, part two goo.gl/XkTRhp

Highlights

- Speech generation goo.gl/MEDv7M

- Speech recognition goo.gl/tCEkVz

- Auto ML goo.gl/fx2FuP

-- NASNET - goo.gl/becAET

1.2

Posted before - but WildML 2017 summary is also awesome goo.gl/ZFtFVT

2. Datasets

- YouTube-8M (goo.gl/nyP9gp): >7 million YouTube videos annotated with 4,716 different classes

- YouTube-Bounding Boxes (goo.gl/c3K6YY): 5 million bounding boxes from 210,000 YouTube videos

- Speech Commands Dataset (goo.gl/TWsTi8): thousands of speakers saying short command words

- AudioSet (goo.gl/TVA3LJ): 2 million 10-second YouTube clips labeled with 527 different sound events

- Atomic Visual Actions (AVA) (goo.gl/Ba4U73): 210,000 action labels across 57,000 video clips

- Open Images (goo.gl/2Xj8Xd): 9M creative-commons licensed images labeled with 6000 classes

- Open Images with Bounding Boxes (goo.gl/qRkvMy): 1.2M bounding boxes for 600 classes

- QuickDraw dataset (goo.gl/FSsfYm)

3.

Uber about genetic approach to neural networks - eng.uber.com/deep-neuroevolution/

#digest

#data_science

#deep_learning

#machine_learning

The Google Brain Team — Looking Back on 2017 (Part 1 of 2)

Posted by Jeff Dean, Google Senior Fellow, on behalf of the entire Google Brain Team The Google Brain team works to advance the state of ...


snakers4 (Alexander), January 17, 09:25

Nice presentation to learn about Semantic Segmentation

slides.com/vladimiriglovikov/title-texttitle-text/fullscreen#/0/5

www.youtube.com/watch?v=MYp3OwkiJAs

#data_science

#deep_learning

V. Iglovikov - on segmentation, Kaggle and life in general
Slides - slides.com/vladimiriglovikov/title-texttitle-text/fullscreen#/

snakers4 (Alexander), January 10, 03:20

A GAN / style-transfer paper review, ~70% complete:

- review spark-in.me/post/gan-paper-review

- TLDR - author.spark-in.me/gan-list.html

Did not crack the math in Wasserstein GAN though.

Also, a friend of mine has focused on GANs for ~6 months. Below is the gist of his work:

- GANs are known to be notoriously difficult and tricky to train even with wasserstein loss

- The most photo-realistic papers use custom regularization techniques and very sophisticated training regimes

- Seemingly photo-realistic GANs (with progressive growing)

-- are tricky to train

-- require 2-3x the time to train the GAN itself and an additional 3-6x to use growing

- end result may be completely unpredictable despite all the efforts

- most GANs are not viable in production / mobile applications

- visually in practice they perform much WORSE than style transfer

Training TLDR trick

- Use DCGAN just for training latent space variables w/o any domain

- Use CycleGAN + Wasserstein loss for domain transfer (loss sketched below)

- Use growing for photo-realism

As for using them for latent space algebra - I will do this project this year.
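Since the Wasserstein loss comes up repeatedly above, here is a minimal sketch of the WGAN objective (the weight-clipping variant from the Arjovsky et al. paper; WGAN-GP replaces clipping with a gradient penalty). critic and generator are assumed to be ordinary nn.Modules:

import torch

def critic_step(critic, generator, real, z, opt_c, clip=0.01):
    opt_c.zero_grad()
    loss = critic(generator(z).detach()).mean() - critic(real).mean()   # negative Wasserstein estimate
    loss.backward()
    opt_c.step()
    for p in critic.parameters():
        p.data.clamp_(-clip, clip)        # enforce the Lipschitz constraint by weight clipping
    return loss.item()

def generator_step(critic, generator, z, opt_g):
    opt_g.zero_grad()
    loss = -critic(generator(z)).mean()   # generator tries to maximise the critic score
    loss.backward()
    opt_g.step()
    return loss.item()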

#deep_learning

#data_science

GAN paper list and review

In this post I list useful / influential GAN papers and papers related to sparse unsupervised data CNN training / latent space operations. Author's articles - http://spark-in.me/author/snakers41. Blog - http://spark-in.me


A US$1 million prize, US-citizen-exclusive Kaggle challenge... for just stacking ResNets?

- www.kaggle.com/c/passenger-screening-algorithm-challenge/discussion/45805

America is fucked up bad...

Also notice the shake-up and top scores

- Public goo.gl/2utoDC

- Private goo.gl/GXpnWe

#data_science

#sick_sad_worlds

snakers4 (Alexander), January 08, 06:47

A 2017 ML/DS year in review by some venerable / random authors:

- Proper year review by WildML (!!!) - www.wildml.com/2017/12/ai-and-deep-learning-in-2017-a-year-in-review/

-- Includes a lot of links and proper materials

-- AlphaGo

-- Attention

-- RL and genetic algorithm renaissance

-- Pytorch - elephant in the room, TF and others

-- ONNX

-- Medicine

-- GANs

If I had to summarize 2017 in one sentence, it would be the year of frameworks. Facebook made a big splash with PyTorch. Due to its dynamic graph construction similar to what Chainer offers, PyTorch received much love from researchers in Natural Language Processing, who regularly have to deal with dynamic and recurrent structures that are hard to declare in static graph frameworks such as Tensorflow.

Tensorflow had quite a run in 2017. Tensorflow 1.0 with a stable and backwards-compatible API was released in February. Currently, Tensorflow is at version 1.4.1. In addition to the main framework, several Tensorflow companion libraries were released, including Tensorflow Fold for dynamic computation graphs, Tensorflow Transform for data input pipelines, and DeepMind’s higher-level Sonnet library. The Tensorflow team also announced a new eager execution mode which works similar to PyTorch’s dynamic computation graphs.

In addition to Google and Facebook, many other companies jumped on the Machine Learning framework bandwagon:

- Apple announced its CoreML mobile machine learning library.

- A team at Uber released Pyro, a Deep Probabilistic Programming Language.

- Amazon announced Gluon, a higher-level API available in MXNet.

- Uber released details about its internal Michelangelo Machine Learning infrastructure platform.

- And because the number of frameworks is getting out of hand, Facebook and Microsoft announced the ONNX open format to share deep learning models across frameworks. For example, you may train your model in one framework, but then serve it in production in another one.

- In Russian - goo.gl/z1nLzq - kind of a meh review (source - goo.gl/NUQ18C)

- Amazing 2017 article about global AI trends - srconstantin.wordpress.com/2017/01/28/performance-trends-in-ai/

- Uber engineering highlights - goo.gl/jBo91k

#digest

#deep_learning

#data_science

AI and Deep Learning in 2017 – A Year in Review

The year is coming to an end. I did not write nearly as much as I had planned to. But I’m hoping to change that next year, with more tutorials around Reinforcement Learning, Evolution, and Ba…


snakers4 (Alexander), January 04, 01:27

Starting my GAN paper review series - it is ~40% complete

- spark-in.me/post/gan-paper-review

Please comment / share / provide feedback.

#data_science

#deep_learning

GAN paper list and review

In this post I list useful / influential GAN papers and papers related to sparse unsupervised data CNN training / latent space operations. Author's articles - http://spark-in.me/author/snakers41. Blog - http://spark-in.me


snakers4 (Alexander), January 03, 12:20

While reading GAN papers, I stumbled upon even creepier pix2pix cats

- raw.githubusercontent.com/junyanz/pytorch-CycleGAN-and-pix2pix/master/imgs/edges2cats.jpg

#deep_learning

And these are my best cats


snakers4 (Alexander), January 02, 10:02

A nice repo with paper summaries (2-3 pages per paper)

- github.com/aleju/papers/tree/master/neural-nets

#deep_learning

#data_science

aleju/papers

Summaries of machine learning papers