22/05/2015

Deep learning talk @Zalando Tech Event

First Zalando Tech Event at their Tech HQ near Alexanderplatz yesterday evening. To open their series of Meetup events, Zalando invited Professor Sepp Hochreiter of Johannes Kepler University in Linz to talk about deep learning.

[Photo: attentive crowd]

About the talk
The talk was good but not aimed at an academic audience. If you are familiar with the topic you probably wouldn't have learned anything new. But the talk did lead to interesting - and often expected - questions around deep learning. Sadly - to me - they were more about "where does it work?" and "what are the best parameters?" than "how does it actually work?"

As the speaker reminded us, neural networks (NNs) aren't new on the market. They were discovered years ago; they looked promising, then nothing came of it, other techniques took over and left the specialists in their niche. I remember courses during my master in image processing about 15 years ago [at Pierre et Marie Curie Paris VI] where the person teaching and introducing KNN and NNs sounded both excited and disenchanted. This lasted until computers got faster (thanks to CPUs, GPUs, many-core architectures, clusters, graphics card programming and so on) and suddenly it was possible to use NNs, to get results, to reproduce them, and to win classification challenges by a wide margin over the experts of the field.

For every new promising technique there is the temptation to use it for everything in a brute-force manner. But it doesn't work all the time. One remark from the speaker was that these solutions work when you are overloaded with data, when you are immersed in data. It's no surprise that big players such as Google, Facebook and Amazon are busy growing their deep learning teams.
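To make that remark concrete, here is a minimal sketch (my own illustration, nothing from the talk) assuming scikit-learn is available: the same classifier is trained on growing slices of a synthetic dataset, and its test accuracy typically climbs with the amount of data.

```python
# Illustration only: accuracy of one model as the training set grows.
# The dataset is synthetic; the point is the trend, not the numbers.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=20000, n_features=50,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

for n in [100, 1000, 10000]:
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train[:n], y_train[:n])      # train on a slice of size n
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{n:>6} training samples -> test accuracy {acc:.3f}")
```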

About automation, AI and drugs
You hear and see more and more presentations about deep learning and artificial intelligence (AI) where people dream of an AI able to put words on a given image in a similar way a human would. It kind of works, but there is no magic. It reminded me of an experiment where researchers claimed to be able to reconstruct images/video corresponding to what we see in our dreams. People often fear - and they may be right - computers taking control over us, making decisions for us, until we end up working for them.

It is interesting to understand why pharmaceutical companies - those making drugs - are so big on deep learning. Bioinformatics offers the perfect environment for developing big data solutions. Here I'm not talking about the phase where a drug needs to be tested and evaluated on humans, but about what happens before. Biology and chemistry (or computational chemistry) can be simulated using pretty accurate models, meaning you don't need to run an actual biological or chemical experiment. You can simulate the experiment, generate a huge amount of data and let your algorithms do the analysis. And guess what: computer vision, machine learning and deep learning - not to mention optimization - are part of the solution. And the faster you get your results, the faster you have a new drug to potentially bring to market, hopefully before your competitor. I'm not sure "normal" people get a glimpse of that side of research; in that field it's actually the biological/chemical experiment that validates a virtual experiment (remember to watch Terminator 4 or 5, at least the last one on screen...).
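As a toy version of that simulate-then-analyze loop, here is a hedged sketch: a made-up "simulator" scores candidate molecules from a few numeric descriptors, and a model is then fitted on the simulated results to rank fresh candidates cheaply. Every name here (the descriptors, the scoring rule) is invented for illustration; no real chemistry is involved.

```python
# Toy simulate-then-analyze loop: the "simulator" below is an invented
# stand-in for an expensive virtual (in silico) experiment.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def simulate_binding_score(descriptors):
    # Hypothetical simulator: a noisy nonlinear function of the inputs.
    w = np.array([0.8, -0.5, 0.3, 0.1])
    return np.tanh(descriptors @ w) + rng.normal(0, 0.05, len(descriptors))

# Step 1: run the "simulator" on many candidate molecules.
candidates = rng.normal(size=(5000, 4))      # 4 made-up descriptors each
scores = simulate_binding_score(candidates)

# Step 2: let a learning algorithm do the analysis.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(candidates, scores)

# Step 3: rank fresh candidates with the cheap model, keeping only the
# top ones for an actual biological/chemical experiment to validate.
fresh = rng.normal(size=(1000, 4))
best = np.argsort(model.predict(fresh))[-10:]
print("candidates to send to the lab:", best)
```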

About the big brain project and graphic cards and evolution
Research is cool. It's very interesting to see how connections between highly specialized fields come together to build a new framework for research. The big brain projects (the EU's Human Brain Project and the US BRAIN Initiative) are the perfect example: fields from neuroscience to computer graphics, along with hardware manufacturers, need to collaborate to build a virtual brain model.

One of the last comments from the speaker yesterday had a pertinent echo in my head. It illustrates perfectly how technology evolves and how frameworks cross paths. He told us that graphics card manufacturers (such as NVIDIA, to name one) are now developing hardware dedicated to running deep learning workloads - once again the hardware architecture helping to speed up a programmed algorithm. But until when, and is it a good approach?
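As a rough illustration of why that hardware matters - my own sketch, using PyTorch as a stand-in for any GPU-accelerated framework, not something shown at the talk - the same matrix multiplication, the bread and butter of NN training, can be timed on CPU and GPU:

```python
# Same computation, two devices: the speed-up comes from the hardware,
# not from the algorithm. Assumes PyTorch with CUDA support is installed.
import time
import torch

def time_matmul(device, n=4096, repeats=10):
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()          # GPU work is async; wait first
    start = time.perf_counter()
    for _ in range(repeats):
        a @ b                             # the actual workload
    if device == "cuda":
        torch.cuda.synchronize()          # wait for the GPU to finish
    return (time.perf_counter() - start) / repeats

print(f"cpu : {time_matmul('cpu'):.3f} s per matmul")
if torch.cuda.is_available():
    print(f"cuda: {time_matmul('cuda'):.3f} s per matmul")
```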

Years ago - and not so long ago - when computers were already getting faster, people designed dedicated hardware to run image processing/computer vision algorithms, because computers in their state at the time weren't fast enough. It was as if the brain were too small and needed to grow or modify its physical body to evolve. But then computers became faster and those special designs turned out to be too specialized, not adaptable enough. I feel that we are living through a similar phase with deep learning. The question is whether hyper-specialization of computer hardware is the solution - momentarily for sure - for deep learning or not.

About the future
We are all doomed. Soon computers will be smart enough to redesign their bodies when they reach their limits, in order to surpass them. I don't have any spoiler about how and when; our Mayan friends had a big fail about that three years ago, so we will have to be patient.


2 comments:

Unknown said…

This was not the first meetup that ZalandoTech has hosted. The ones I remember, and that were quite popular as well, are the AWS Meetup and the PostgreSQL Meetup :-)

mrbonsoir said…

My bad, Valentine, and thanks for rectifying my mistake. I probably read the description a bit too fast... I'm now waiting for the next Tech Event there.