0603 Introduction to Deep Learning on Hadoop

Good afternoon, everybody. Still getting that coffee fix? Yeah, it's that time, huh — at these conferences, at this point in the day, I go find a corner and take a nap. Welcome to Deep Learning on Hadoop. Let's talk about some math today — I know you're already a little sleepy, so I'm going to go ahead and put you right on down. All right, so there's two of us today; we're tag-teaming this, going for the title in tag-team machine learning talks. This is Adam Gibson over here. These slides will be on SlideShare later. I'm not going to talk about how wonderful and smart Adam is — you'll figure that out very soon. This is me; you can figure out how great I am later on. This is what I used to do: smart grid, Hadoop, blah blah blah. You're not here for that. Let's talk about Hadoop and deep learning.

Let's break this down into four parts. First, what is deep learning? It's in the media a lot right now — the Wall Street Journal, Yahoo, what have you; a lot of articles, a lot of hype. Where does the rubber hit the road, and how can you actually use it on your Hadoop deployment today? Digging further into that: the type of deep learning we've implemented, deep belief networks; how they're implemented and how the code sits on Hadoop and YARN; and then the results of the techniques we've implemented, which you can use today.

All right — come on, stragglers, the coffee wasn't that good. So, what is deep learning? It's not magic, and I'm going to bust some bubbles: it's not artificial intelligence. I used to be a GRA and we would laugh about AI — we'd read Norvig's AI book. It's an algorithm. It's high-grade machine learning that tries to learn simple features at the lower levels, then builds those up and learns more complex features at higher and higher levels of what's effectively a very exotic neural network. It reduces the problem's complexity as it moves up through the layers of the network, and it really introduces an unsupervised component to a supervised world. Neural networks are traditionally trained with backpropagation: they're based around the concept of using some sort of gradient descent, calculating a loss function to update the weights. That normal neural-net training would be a supervised learning technique. The interesting property that researchers such as Hinton have introduced is an unsupervised technique, which matters in places where we have a ton of unlabeled data — because unless you have fleets of graduate students, you don't typically have a lot of labeled data. It's "take these million instances and go label them for me," which takes months, years, I don't know. So supervised versus unsupervised: places with lots of unlabeled data, such as a Hadoop cluster ingesting large amounts of data, become very interesting. It also gives us better ways to handle vectorization — you give it a more raw representation and it says, "I'm going to build your features the way I want to" — as opposed to manual feature creation, which can be very delicate, involve lots of secret sauce, and, quite honestly, be kind of a black art.

So what we're doing here is chasing nature, and given my background — that's the first slide — I've always had an affinity for that. I spent a couple of years in grad school studying ant algorithms. Because ants are really great at computing, right? No, they're not — but they are the best distributed system in nature: bottom-up self-organization, quantitative stigmergy, things like that. Biologically inspired algorithms have always been something I get a kick out of. I'd looked at neural nets on and off over the years and thought they were interesting, and then when some of the later deep learning research came back up, I got back into it.
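That supervised loop — predict, compute a loss, nudge the weights along the gradient — fits in a few lines of toy Python. The one-weight model and the data here are made up purely to fix the idea; this isn't DL4J code:

```python
# Toy supervised learning: one weight, squared-error loss, plain
# stochastic gradient descent -- the supervised half in miniature.
def train(xs, ys, lr=0.1, epochs=100):
    w = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            pred = w * x
            grad = 2 * (pred - y) * x   # d/dw of (w*x - y)^2
            w -= lr * grad              # gradient-descent weight update
    return w

# Labeled examples following y = 2x; w converges toward 2.0.
w = train([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```

The unsupervised pre-training idea swaps out the labels: instead of matching `ys`, the network learns to reconstruct its own input — which is what the RBM discussion later gets into.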
An interesting thing about deep learning: it learns sparse representations for auditory signals — this is directly from some of the research papers — and the learned filters closely resemble neurons in early audio processing in mammals. I'm not going to sit here and say it's a brain, but it has some similar properties in how it does audio processing, and that's pretty interesting. So it's biologically inspired, but it's not a brain. When applied to speech, the learned representations showed a striking resemblance to the cochlear filters in the auditory cortex — again, biologically inspired, and very interesting in how it was learned.

Geoffrey Hinton is one of the top researchers. Yann LeCun is another — if you've followed machine learning, you've probably seen his name; he's a researcher at Facebook (excuse me, not Google — my bad). He's one of the top guys in the game, he has a lot of opinions about deep learning, and he's commented in one paper on how deep learning has become the dominant technique in acoustic modeling. Something that really appealed to me, watching the research: as you read machine learning papers you ask who's winning the benchmark contests — who's winning on recommendation, who's winning on the TIMIT database for audio recognition, things like that — and deep learning has consistently, over the past five years, been climbing those charts: object recognition, scene and semantic segmentation, and so on. That's why I say it's higher-grade machine learning — it's begun to climb over the previous top-accuracy peak and push it farther. You can see it in things like cars. Anybody ever worked on a DARPA Grand Challenge? I did, and it was rough — we tried to do object detection in real time with OpenCV, and on some of the poor video the techniques couldn't pick out anything. And you can't go ahead and give the keys to a car that can't see — don't be on the test track. But whenever your machine learning techniques get above certain thresholds, their value changes — now you have cars running around Mountain View on their own. That's an example of how, as we drive up accuracy, the value proposition changes.

Okay, so what is a deep belief network? We've seen the traditional multi-layer feed-forward networks — or have we? How many of you have actually seen a multi-layer feed-forward network? There we go, a couple of us. Those have been around since the '80s. Hinton was essentially one of the grandfathers of neural networks — he invented a lot of the baseline techniques, and he was also the innovator of what Josh mentioned earlier today: being able to learn semantic representations of arbitrary pieces of data, very similar to how we perceive things in our own minds. That being said, a deep belief network is a generative probabilistic graphical model. What does that mean? It's a feed-forward network: a stack of restricted Boltzmann machines, which we'll discuss in a minute, that learn a set of features, plus some sort of output layer. That output layer is typically logistic regression; it could be a support vector machine, relative to the objective function; it can do regression; it can even learn to reconstruct arbitrary pieces of data from just ten numbers — that's called a deep autoencoder, take notes on that for later. So with a deep belief network, the general idea is that you generatively learn a set of features, then the next layer learns the probability distribution of the previous layer's features, and so on. At the end of it you get a learned representation of the data.
That representation is actually capable of representing features better than humans can. Half of the data scientist's job is asking: what are the good correlations in the data, what are the trends, what are ways I can take this data and do something with it? This is immediately evident in computer vision, where we do all sorts of image transforms — a Sobel filter for edge detection, Gaussian kernels to highlight features that otherwise might not stand out. In natural language processing we typically deal with all sorts of hand-picked features, tokens, whatever. Deep belief networks learn these features.

I said I would explain what restricted Boltzmann machines are. A restricted Boltzmann machine is essentially a generative model with an input layer — you have the visible and the hidden, and the visible and the hidden clamp together; they actually help each other learn. The idea is that a restricted Boltzmann machine is an energy model that keeps saying, "these don't match yet," and it repeatedly samples from itself in order to understand: do I have a good representation of the data? The hidden layer works to mirror the visible, and the visible samples from the hidden. The major problem with restricted Boltzmann machines can be the energy functions, where you can only represent binary or continuous inputs — you actually have to modify the RBM so it can handle different kinds of data.

Just a little bit on a project that I came onto with Josh here: Deeplearning4j. Deeplearning4j is essentially a distributed runtime for deep learning built on top of Scala. It's basically self-contained deep learning — obviously that by itself isn't relevant to Hadoop, but we can scale it with Hadoop, by making deep learning a YARN app.
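To make that sampling loop concrete, here's a tiny restricted Boltzmann machine in plain Python — binary units, symmetric weights, biases omitted for brevity, trained with one step of contrastive divergence (CD-1). The sizes, learning rate, and class name are made up for illustration; this is a sketch of the idea, not DL4J's implementation:

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# A tiny restricted Boltzmann machine with binary visible/hidden units.
class TinyRBM:
    def __init__(self, n_visible, n_hidden, lr=0.1, seed=0):
        self.rng = random.Random(seed)
        self.W = [[self.rng.gauss(0.0, 0.1) for _ in range(n_hidden)]
                  for _ in range(n_visible)]
        self.lr = lr

    def hidden_probs(self, v):          # p(h_j = 1 | v)
        return [sigmoid(sum(v[i] * self.W[i][j] for i in range(len(v))))
                for j in range(len(self.W[0]))]

    def visible_probs(self, h):         # p(v_i = 1 | h)
        return [sigmoid(sum(h[j] * self.W[i][j] for j in range(len(h))))
                for i in range(len(self.W))]

    def sample(self, probs):
        return [1.0 if self.rng.random() < p else 0.0 for p in probs]

    def cd1(self, v0):
        # The hidden layer mirrors the visible, the visible resamples from
        # the hidden, then weights get nudged toward the data and away
        # from the reconstruction -- the "do we match yet?" loop.
        h0 = self.hidden_probs(v0)
        v1 = self.visible_probs(self.sample(h0))
        h1 = self.hidden_probs(v1)
        for i in range(len(v0)):
            for j in range(len(h0)):
                self.W[i][j] += self.lr * (v0[i] * h0[j] - v1[i] * h1[j])
        return v1   # the reconstruction -- watch it approach the input
```

Train it repeatedly on the same visible pattern and the reconstruction drifts toward the data, which is exactly the repeated self-sampling described above.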
As a YARN app it can scale to arbitrary amounts of data — Google-level data; Google has used deep learning at scale for years now. The idea was to model what an ideal distributed system would look like and work out all the logistics at runtime: it talks to any data source, sets itself up, runs some distributed calculations, and gives you a nice lightweight model. The idea is the same with YARN on HDFS.

One key thing with deep learning is that everything is vectorized. In machine learning, you typically take some data and turn it into what's called a feature vector. A feature vector, for those of you who don't know, is just a representation of the data. Say I have housing data and I'm trying to predict a housing price: I want the observed features, and then I want the label — the price. If we imagine n-dimensional data — say a hundred different observations about each example, for the task we're trying to achieve — that's all done in vectors and matrices. One of the things vectorized implementations let us do is say, "here are all these examples; let's use some nice linear algebra and solve all these equations at once." GPUs are really good at that. CPU-bound versions can work too — they're not as fast, but CPUs are a lot more accessible. The idea is that regardless of what we do, everything's a matrix.

Traditionally, deep learning toolkits have been very research-driven — nothing has been easy to use. We see that in the Python implementations, Theano and a lot of the libraries wrapped around it; another is Torch, written in Lua. A lot of these systems aren't meant for distributed runtimes.
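The feature-vector idea from a minute ago, as a toy sketch — the field names and numbers here are made up; the point is just the shape the data ends up in:

```python
# Turning raw records into (feature vector, label) pairs.
# Hypothetical housing records; "price" is the label we want to predict.
houses = [
    {"sqft": 1400, "bedrooms": 3, "price": 250000},
    {"sqft": 2000, "bedrooms": 4, "price": 340000},
]

def vectorize(record):
    # Features in a fixed order; the label is kept separate.
    features = [float(record["sqft"]), float(record["bedrooms"])]
    label = float(record["price"])
    return features, label

X, y = zip(*(vectorize(h) for h in houses))
# X is now a matrix (one row per example), y a vector of labels --
# exactly the shape a linear-algebra library or a GPU wants to chew on.
```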
They're meant to run on laptops, and they run very fast there — but deep learning can be very slow to train if you don't do it at scale. That's why, whenever Hinton talks, it's "well, this model gets 98% accuracy, but it took a month to train." Who wants to wait a month? That's what we're using Hadoop for — let's throw CPUs at it and leverage what we've got.

So what are good applications of deep learning? As Josh mentioned earlier, there have been record-breaking results achieved with deep learning. MNIST is optical character recognition: the digits zero through nine, 60,000 of them, and deep nets get around 99% accuracy on human handwriting, essentially overnight. In audio processing, using moving-window techniques — slicing up parts of a signal — deep learning has been able to identify different elements of speech within sound waves. And in text and NLP, what it's capable of is saying, "these words are used in similar ways." This automatic, unsupervised representation learning lets us take a word, ask how it's used, and turn that into a vector; we can then feed that into a deep belief network and do named-entity recognition. Work by Collobert, and by people at Stanford — Richard Socher — has shown record-breaking results on NLP tasks like sentiment analysis: over 90% accuracy, easily. Those are traditionally very hard problems where we'd spend a lot of time figuring out the features; deep learning handles a lot of that for us.

So that's some of the theory and the fundamentals of what we're dealing with. There are really a couple of levels here — Adam talked about the fundamentals, RBMs and layers building up to deep belief networks — but how do you really execute that on Hadoop? That's what I want to talk about in the next section. It builds on some prior art I've worked on and learned from, especially things like Vowpal Wabbit and work out of Google like Downpour SGD — Jeff Dean's work, obviously. Previously I'd done some work with parallel linear regression — starting with logistic regression, which is generalized linear modeling — parallelizing it with parameter averaging, a trick I picked up from people like Alex Smola, and one that's used throughout the parallel SGD work, including Jeff Dean's. Then I did an implementation of stock neural networks, and then Adam kind of dropped out of the sky and said, "you're doing all your math very poorly; you should do it vectorized — it's insanely faster." And I was like, okay, okay — and then he showed me, and I was like, okay, I'm wrong, I can admit it on stage. I packaged that up in Metronome, which has been my development platform for parallel iterative algorithms. That's where the current work came from: we took the core of DL4J — the concepts, the vectorized core engine — and wrapped it in the Iterative Reduce framework to scale out on Hadoop, with lots of workers working on shards of the data.

This parameter averaging is kind of the trick that keeps on giving in machine learning: if you want to do linear modeling, put lots of workers on the shards, average the parameters every so often, and eventually you get a really good answer. Take a look at McDonald et al., 2010. John Langford, an incredibly well-published researcher in this space, has done a ton with Vowpal Wabbit, which is incredibly, screamingly fast. And Jeff Dean is the guy whose papers I just chase down — Downpour SGD is a really interesting paper, the DistBelief paper — and that's where I get the bulk of my
ideas for Iterative Reduce, the parallel framework on YARN, and the parallel iterative algorithms on top of it. So what does that look like today? What did we use before 2012? MapReduce — that was your workhorse, and you just shoehorned things into it. And I have this theory — I call it Patterson's law, because "Patterson's theorem" doesn't sound nearly as good — and it says: as the amount of your organization's data in HDFS approaches 100 percent, the amount of in-place analytics in Hadoop will approach 100 percent as well. YARN has driven that, with all that wonderful work to support it.

MapReduce looks about like this, and parallel iterative looks about like this. The trick is, they can look very similar, but in a BSP-style processing graph you hit a boundary condition every so often, and you really want those workers to stay alive — because MapReduce setup costs are something like 30 seconds. If you do a thousand passes over a data set, that's thirty thousand seconds of nothing but setup costs — roughly eight hours — and you just want to get those passes done. If your workers stay resident, like they introduced in YARN, you can change the mechanics of how your parallel framework operates; you can see more about that in some of my previous talks.

So here's what it ends up looking like. A serial online algorithm trains by looking at an example, updating the model, then looking at the next example. Here, you want each worker sitting on top of a shard of the data with its own copy of the model, and every so often you average them together — parameter averaging — into a global model. Those splits line up well with HDFS blocks: move the worker onto the block, like a map task — but it's not a map task.
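That picture — workers training on shards, a master averaging their parameters every so often — can be sketched with a toy one-weight linear model. The data and shard layout below are made up; the real Iterative Reduce moves this same pattern onto YARN workers sitting on HDFS blocks:

```python
# Parameter averaging, the "trick that keeps on giving": workers train on
# their own shards, the master averages the weights and broadcasts them back.
def local_sgd(w, shard, lr=0.01, passes=5):
    for _ in range(passes):
        for x, y in shard:
            w -= lr * 2 * (w * x - y) * x   # plain SGD on squared error
    return w

def average(params):
    return sum(params) / len(params)

# Data following y = 3x, split across two "workers" (think: HDFS blocks).
shards = [[(1.0, 3.0), (2.0, 6.0)],
          [(3.0, 9.0), (4.0, 12.0)]]

w_global = 0.0
for superstep in range(20):                               # BSP-style supersteps
    local_ws = [local_sgd(w_global, s) for s in shards]   # workers, in parallel
    w_global = average(local_ws)                          # master averages
```

Each superstep is the BSP boundary condition from the slide: train locally, average at the master, send the global model back out, repeat.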
It's a specialized Iterative Reduce worker on YARN. And YARN is important. There's a lot of distributed-systems talk out there these days; YARN solves some real problems, but it's complex to write to, and that's why we wrote a layer under our algorithms called Iterative Reduce. Sometimes people ask why we'd use YARN. Typically you're going to use a fair scheduler to manage how apps use resources, and you get isolation between applications, so everybody can share the cluster nicely. Being a first-class YARN citizen matters so you can mix this in with a Tez job, a MapReduce job, an Impala job — whatever's in your distro — and build workflows out of it. So being on top of YARN is important here, and we want these tools to share resources nicely.

Like Adam said, there's an unsupervised component to this and a supervised component, and the two phases are called pre-train and fine-tune. Pre-train is when we take unlabeled data and run it through those RBMs with no labels; they just reconstruct the data over and over again, each layer feeding up, until the network has basically an overall general structure of the data. Then we give it some labeled data and say: you have ideas now; we're going to tell you what those ideas are and fine-tune the model, so that whenever you see this idea, you slap this label on it. It's an interesting combination of unsupervised and supervised learning. For each phase, we might want to do, say, ten passes over the data set for pre-train and then only one pass of fine-tune — or vice versa — and that's configurable in the tools. Every so often we average the entire network at the master: we send the pre-trained RBMs, the whole feed-forward network, and the logistic regression output layer up to the master, average them all together, send the global network back to all the workers, and they continue on whatever pass of pre-train or fine-tune they're in.

Something we've been experimenting with — and my gut says this is where it gets interesting — is that with Hadoop ingesting a ton of data through Flume and Kafka and things like that, we can spend a lot more pre-train time on that data, because I don't want to label all of it. If I can get away with labeling just a little bit, I can data-mine a whole lot more data and get much better models. That's an area we're exploring further. So I'm going to let Adam talk about the results we found.

So, as I mentioned before, deep learning has done record-breaking things — but what have we done ourselves? You might want to see something from us. For results with Iterative Reduce, one thing I did, which was fun: I set up a 28-node cluster on Amazon — I believe 30-core machines — and ran all of MNIST in about ten minutes rather than hours. The difference is not only that it's much faster to train: the built-in parameter averaging also gives you automatic regularization. What is regularization, for those of you who aren't familiar? The general idea is that a neural net learns features and it overfits; regularization is what you do to counterbalance that, so the model generalizes better. Here, regularization happens automatically via parameter averaging. We've also added AdaGrad — adaptive, feature-wise learning rates. If a feature is learning too fast, slow it down: AdaGrad is a way of speeding up or slowing down the learning depending on the parameter that's in play.
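AdaGrad itself is only a few lines: keep a running sum of squared gradients per parameter, and divide each step by its square root. The gradients below are toy constants purely for illustration, not what a real net produces:

```python
import math

# AdaGrad: each parameter gets its own effective learning rate, scaled
# down by the history of that parameter's own squared gradients.
def adagrad_update(params, grads, history, lr=0.5, eps=1e-8):
    for i, g in enumerate(grads):
        history[i] += g * g                              # accumulate g^2
        params[i] -= lr * g / (math.sqrt(history[i]) + eps)
    return params, history

params, history = [1.0, 1.0], [0.0, 0.0]
# Parameter 0 keeps seeing gradients 40x larger than parameter 1:
for _ in range(10):
    params, history = adagrad_update(params, [4.0, 0.1], history)
# Yet both parameters end up moving about the same distance: the
# per-parameter scaling normalizes away raw gradient magnitude, so no
# single "fast" feature runs away with the learning.
```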
So we're still experimenting with whether we want to average the AdaGrad state or what exactly we want to do there; the key point is that it's adaptive and more robust.

What are some of the scale-out knobs we get to play with? With Hadoop — with any distributed system — we need to think about batch size. In any intro to Hadoop you'll hear about input splits and all sorts of optimization settings; what we need to think about is how many records we average over. Usually you want the batch size smaller if possible — often 10 to 100, depending on your data set. The counterbalance is: if I train on a hundred examples at a time, it trains faster; if I train on a smaller batch, it might be more accurate. It really depends. The other thing to think about is message passing — just telling each worker "train on this bit of data." In a distributed system that gets chaotic and adds a lot of overhead, so we need to balance it and think about what we want each worker to train on when we deploy. This is all configurable via properties files: you call the YARN jar, it picks up your configuration, and you use that to tune what you're doing. If we get to the command line, all you need to do is literally think of how you normally run a Hadoop app — java, jar, my gigantic jar file, and my configuration — and from there all you worry about is training your model and the other fun stuff that goes with it.

So what are some of the things we've seen? This is an example of
handwriting renders. The handwriting is from MNIST, which I mentioned earlier, and this is each neuron being rendered by a deep belief network — by a restricted Boltzmann machine. The idea is that each neuron actually learns how to draw a two, a three, a nine, a seven — and from there the network uses the images themselves, and its internal representation, as features to learn: how do I recognize what a one is? It does this, again, by repeatedly sampling the data, and we've had record-breaking results on MNIST. At a very minimum, the papers show that it works. As crazy as it sounds — no human-engineered features; what's going on here? — it works.

I don't know how many of you have heard of DeepFace. This is the Labeled Faces in the Wild dataset — again, the same kind of results. So why am I showing you face renders and all this other stuff over and over? The reason is to visualize how a neural network learns: a neural network learns by learning how to redraw the things you show it. The reconstructions will differ for different data sets, and you can actually use them for debugging. The general idea is that the cloudier the reconstruction, the worse it is — or maybe it's just too early in the training. There are all sorts of techniques you can use; the point is to understand whether the network is learning what I want it to. We can also do this with words and other sorts of data, but I wouldn't recommend the renders unless you're doing computer vision — this is just to visualize the concept. So what did DeepFace do? DeepFace actually beat human accuracy on Labeled Faces in the Wild: 98.5%, where the previous result was around 97.5. It's marginally better than humans. And again, we've seen this with text, audio, anything you can think of.

Now, just so you're not falling asleep: what about cat photos? The internet loves cats. If you remember Andrew Ng's research, he taught a gigantic neural network to learn what a cat is — it actually output its general representation of a cat. Same idea: we showed it some cats, it recognized all these different kinds of cats, and now we can build a classifier. The general idea, to close this part out, is that we have these three or four layers of stacked RBMs; they learn features, and then, relative to your output layer, they do some task — classification, regression. Everything you could do with neural networks can now be done in a more automatic fashion.

So what are some future directions — what else could we do? Amazon now offers GPU instances, and the researchers keep harping on GPUs, GPUs — so one thing we want to do is add GPU processing. The other is to make this a little more turnkey: vectorization tooling. Remember, there are all sorts of vectorization steps needed to feed things into a neural network — a neural network speaks matrices; really, all sorts of machine learning algorithms speak matrices — so we want better vectorization tools for different kinds of data: sound, text, images, video, whatever you might have, vectorized in appropriate ways. So better tooling, and also YARN. One key thing is that the YARN version currently uses Mahout matrices, and Java matrices are slow; we want native ones. Luckily this is changing — one of my complaints with Hadoop was, why do we have Java matrices? We're going to be moving over to jblas. jblas is basically Java matrices backed by native code, so you get all the nice matrix operations from the Java language without worrying about what's native or what's going on underneath — you install jblas and you're done.

So, just some
references — obviously we're standing on the shoulders of giants, so thank you for the research. One is "A Fast Learning Algorithm for Deep Belief Nets": the original pre-training algorithm, the unsupervised learning, came from Hinton in 2006, and that kind of spurred the revolution into what we see now. Another is "Large Scale Distributed Deep Networks" — that was Dean et al.

Also, debugging. We mentioned neural nets are a black box and can be hard to debug. Luckily there are all sorts of techniques and visualizations — histograms, plots — that actually show what's going on in the learning and let us make smart decisions. The typical workflow is: you run an iteration, train on a bit of data, and see how it reacts; usually within one or two iterations you can figure out what it's doing, without a lot of "let me wait eight hours and then see if the results are good." You can react accordingly.

Questions?

[Q&A] Parts of it, parts of it. This was about a million parameters, something like that. Sorry — the question was, how many parameters would we want? Typically, if you look at the Google research, they get up into the hundreds of millions and billions of parameters. So far, for the data sets we built the system for, we're only in the millions — but there's no reason you can't try bigger. It's just that when you're debugging on one machine, or three machines, or in the simulator, I don't want to wait three days to figure out whether my message passing was working. So we're in the millions right now. There are constraints, though: once you go up into the hundreds of millions, you need a parameter server to start sharding the parameters out. If you look at Google's research, they use a parameter server, which Google recently open-sourced. We're just not
there yet to need it but yeah it's still modeled left or the some of the smaller Prem but in the millions of parameters not a lot of people have needs for millions of parameters today but the scale the scale architecture and the lineage is is there to go up yeah so it's it's literally iterative reduce yes mmm yeah I used ooh keeper for the configuration and Hazel cast for the distributed lock and everything that goes into that so so to be clear I want to make some there's there's a deal for J like core like math libraries and then we've wrapped that in different execution places so you can run in your laptop you can run a naka you can run a yarn but each one has like different like it's run differently but it's ported to different places today there's no parameter service for yarn because we're just we haven't we don't have you know 200 million parameters in a network yet it's just not so the aqua version I would say isn't as robust I mean Hadoop was built by people way smarter than us probably the sum of people in this room so I would counter that you know the aqua vision is great and is more meant for self-contained deployments in smaller amounts of data it can scale and I have scaled it out to like I said you know 20 nodes on Amazon and all those sorts of crazy stuff like that but if you want you know this is a distributed runtime that I mainly wrote just to say this is self-contained you can run it it's pretty it's a lot easier to set up and I would say Hadoop is yeah so it's like it's kind of like it's like a core engine in a different deployment mode so if you want to just run a quick distributed system that would be a great way if you have a Hadoop cluster and you want to run deep learning on your data in place you run the yarn version but it's the same mathematical model either way yes so I actually have a tiff if you look at the source code there's actually a deep autoencoder in there I do semantic hashing and I have had good results with it both with you 
know, word2vec vectors, so I've done document search, and I've also done computer vision with it.

Sorry, could you repeat it? Oh, yes: so besides the filters we also have histograms for the weights, as well as plotting of activations, and you debug it literally via colors, you know, how dark is it; you can literally see over time how well it trains. Yeah, I actually had those renders, but unless you're a deep learning person, showing people activation grids on screen is kind of crickets chirping. We can show them too; I have them on my laptop, also the reconstructions. As you watch, the number one starts out as fuzz and then it begins to reconstruct a one. We have those too, and you can run a unit test and generate them, so the code's in there to execute.

Do you have CDH 4.6? Just build the YARN jars. Like I said, what I would do, and I have a demo data set in it: instead of using a big cluster, just take two labels of MNIST, so it's only, like, a one and a zero, and just twenty samples, and run it and watch it learn and go, okay, A or B, you know, is it a one or a zero, based off a whole twenty images. That's a great way just to say, oh, this works, and that's built into the unit tests. You can run that simulated run too, and we can help you get that running; the instructions are on the wiki. Anybody else? Lots of those; come see me after. We could talk for days.

We've got a really great wiki that goes into a lot of literature, and then a lot of plain-spoken, this-is-where-the-code-is and this-is-the-history-behind-it material that explains the different chunks of it if you want to go deep; or if you don't, and you just want to run it on the command line, that's one thing too. There are also a lot of links in there for you to understand what's going on, so if you have a couple of spare moments, do take a read.
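The histogram-style debugging described above can be sketched in a few lines. This is a toy Python illustration, not DL4J's actual renderer: bucket a layer's weights and print one bar per bucket, so you can eyeball over training whether the distribution is drifting or saturating.

```python
# Toy sketch of histogram-based weight debugging (not DL4J's renderer):
# bucket a layer's weights and render one bar per bucket, so drift or
# saturation shows up at a glance as the bars move between snapshots.

def weight_histogram(weights, bins=8, lo=-1.0, hi=1.0):
    """Count weights falling into `bins` equal buckets over [lo, hi)."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for w in weights:
        idx = min(bins - 1, max(0, int((w - lo) / width)))
        counts[idx] += 1
    return counts

def render(counts, lo=-1.0, width=0.25):
    """One '#' bar per bucket; taller bars = more weights in that range."""
    return ["%5.2f | %s" % (lo + i * width, "#" * c)
            for i, c in enumerate(counts)]

# A healthy layer's weights cluster near zero, so the middle bars dominate.
healthy = [-0.1, -0.05, 0.0, 0.02, 0.07, 0.1, -0.02, 0.03]
print("\n".join(render(weight_histogram(healthy))))
```

Watching successive snapshots, or the activation grids and reconstructions the talk mentions, is the same idea: if all the mass piles up at the edges of the histogram, the layer is saturating.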
Oh, so we actually have that. In deeplearning4j I have recursive neural tensor networks, the Socher nets, as well as the convolutional multi-layer network and convolutional restricted Boltzmann machines. The last one I need to actually build is a recurrent net; we're refining those right now. Yeah, there are a lot of exotic networks out there at this point; we're in the special-snowflake phase of neural networks, where they all get these really exotic properties, and there's a lot to them.

So Ming's question is how do you explain these models. Typically, with just a regular regression model you can go look at what the weights were and go, ah, age and square footage of the house correlate really well with sale price, right? You can explain that model. With a million parameters in a nonlinear function, you can't make any sense of it. So how do you go to, say, actuaries in the insurance space and go, hey, this is what this model means? You don't. I think it's Leo Breiman's paper where he talks about the two cultures: in one world you just have to build really good models that have low error rates, and you go, it works, it's a black box. The insurance industry, and different types of industries like it, will be much slower to adopt these, and that's a tough part about using them: they want something they can explain. If you go, it's just magic, that usually doesn't fly in certain industries. So there are some realistic places where you can and cannot apply these things.

So, we've used Labeled Faces in the Wild; I've actually done sentiment analysis with tweets; I've done a lot with deep autoencoders, actually compressing, you know, I just did the cat faces, for example. I have a lot of custom proprietary customer data sets as well that I've done fraud detection with; I've done
all sorts of regression; I've actually done regression and prediction with them. So we've been using these for a while. Yeah, we've worked on this system for six months building components; you need testing at every level to get it all to work. Another thing I use: I use a lot of UCI data sets just because they're well known. One of them is covtype; that's something I got from Sean Owen, who's well known in the space, and he said, yeah, that's the data set that I use, and it gets an amazing percentage on covtype (COVTYPE; UCI is a repository). Obviously we've done a lot with, you know, iris, and we're actually going to be working with the TIMIT data set soon, so, audio. Yeah, audio I think is interesting, because the deep belief networks are pushing that high score up farther.

You know, I've been doing Hadoop since 2009, and it's been very log-based, ETL-based, and then we began the transition into different types of workloads. I think you're going to see over the next couple of years it's going to ingest audio, video, and sensor data in general; I used to do general sensor work on it. Deep belief networks really work well on audio and video, and that's an interesting futurist trend, so we're trying to meet that trend a bit, but it works great on text too.

Apache. Apache, both Apache. Yep, GitHub, Apache license. There's nothing we're hiding, so, you know, it's right there; take it, do what you want. You want support? Come get a business card and we'll help you out, you know, for an hourly rate; that's classic. We're getting close to time. What else we got? Thanks, guys.
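The interpretability contrast raised earlier, regression weights you can read off versus a million-parameter black box, can be seen in a minimal sketch. The housing numbers below are invented for illustration, and the fit is plain gradient descent, not anything from DL4J:

```python
# Toy illustration of the "explainable regression vs. black box" point:
# fit price = slope * sqft + intercept by gradient descent on invented
# data, then read the slope off directly. A deep net's millions of
# nonlinear parameters admit no such per-weight story.

# (square footage in thousands, sale price in $1000s) -- made-up data
houses = [(1.5, 210.0), (2.0, 290.0), (1.2, 150.0),
          (2.5, 360.0), (1.8, 230.0)]

def fit_line(points, lr=0.1, steps=20000):
    """Full-batch gradient descent on mean squared error."""
    slope, intercept = 0.0, 0.0
    n = len(points)
    for _ in range(steps):
        g_s = sum(2 * (slope * x + intercept - y) * x for x, y in points) / n
        g_i = sum(2 * (slope * x + intercept - y) for x, y in points) / n
        slope -= lr * g_s
        intercept -= lr * g_i
    return slope, intercept

slope, intercept = fit_line(houses)
# The weight is directly readable: roughly +160 (in $1000s of price) per
# extra 1,000 sqft on this toy data -- the kind of explanation actuaries
# expect, and exactly what a deep net does not provide per parameter.
print(round(slope, 1), round(intercept, 1))
```

This is the "two cultures" trade-off in miniature: the linear model's one weight carries the whole explanation, while the deep models discussed in the talk buy lower error at the cost of that story.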
