How to ML – Data

So we’ve decided what metrics we want to track for our machine learning project. ML runs on data, so the next step is getting it.

In some cases we get lucky and we already have it. Maybe we want to predict the failure of pieces of equipment in a factory. There are already lots of sensors measuring the performance of the equipment, and there are service logs saying what was replaced on each machine. In theory, all we need is a bit of a big data processing pipeline, say with Apache Spark, and we can get the data in the form of (input, output) pairs that can be fed into a machine learning classifier that predicts whether a piece of equipment will fail based on the last 10 values measured by its sensors. In practice, we’ll find that sensors of the same type that come from different manufacturers have different ranges of possible values, so they will all have to be normalized. Or that the service logs are filled out differently by different people, so those will have to be standardized as well. Or worse, the sensor data is good, but it’s kept for only 1 month to save on storage costs, so we have to fix that and then wait a couple of months for enough training data to accumulate.
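To make the (input, output) idea concrete, here is a minimal sketch in plain Python/NumPy (not Spark) of turning one sensor’s readings into training pairs, with per-sensor normalization. The variable names, shapes and failure log are invented for illustration:

```python
import numpy as np

# Stand-ins for real data: one sensor's hourly readings, plus the hours at
# which the service logs say this machine failed. All values are made up.
readings = np.random.rand(1000)
failure_hours = {412, 879}

# Normalize, so sensors from different manufacturers share a common scale.
readings = (readings - readings.mean()) / readings.std()

# Build (input, output) pairs: the last 10 readings -> "fails at hour t?"
WINDOW = 10
X, y = [], []
for t in range(WINDOW, len(readings)):
    X.append(readings[t - WINDOW:t])
    y.append(1 if t in failure_hours else 0)
X, y = np.array(X), np.array(y)
print(X.shape, y.shape)  # (990, 10) (990,)
```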

The next best case is that we don’t have the data, but we can get it somehow. Maybe there are already datasets on the internet that we can download for free. This is the case for most face recognition applications: there are plenty of annotated face datasets out there, with various licenses. In other cases the dataset must be bought. For example, if we want to start a new ad network, there are plenty of datasets of personal data about everyone for sale online, which can then be used to predict the likelihood of clicking on an ad. That’s the business model of many startups…

The worst case is that we don’t have data and we can’t find it out there. Maybe it’s because we have a very specific niche, such as finding defects in the manufacturing process of our specific widgets, so we can’t use random images from the internet to learn this. Or maybe we want to do something that is really new (or very valuable), in which case we will have to gather the data ourselves. If we want to solve something in the physical world, that will mean installing sensors to gather data. After we get the raw data, such as images of our widgets coming off the production line, we will have to annotate those images. This means getting them in front of humans who know how to tell whether a widget is good or defective. There needs to be a QA process in this, because even humans have an error rate, so each image will have to be labeled by at least three people. We need several thousand samples, so this will take some time to set up, even if we can use crowdsourcing websites such as AWS Mechanical Turk to distribute the tasks to many workers across the world.
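As a sketch of how those three labels per image might be reconciled, here is a minimal majority-vote aggregation; the annotation format and file names are invented:

```python
from collections import Counter

# Hypothetical results: each image was labeled by three different workers.
annotations = {
    "widget_001.jpg": ["good", "good", "defective"],
    "widget_002.jpg": ["defective", "defective", "defective"],
    "widget_003.jpg": ["good", "good", "good"],
}

def majority_label(labels, min_agreement=2):
    """Accept a label only if enough annotators agree; otherwise flag it."""
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= min_agreement else "needs review"

for image, labels in annotations.items():
    print(image, "->", majority_label(labels))
```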

Once all this is done, we finally have data. Time to start doing the actual ML…

How to ML – Metrics

We saw that machine learning algorithms process large amounts of data to find patterns. But how exactly do they do that?

The first step in a machine learning project is establishing metrics. What exactly do we want to do and how do we know we’re doing it well?

Are we trying to predict a number? How much will Bitcoin cost next year? That’s a regression problem. Are we trying to predict who will win the election? That’s a binary classification problem (at least in the USA). Are we trying to recognize objects in an image? That’s a multi-class classification problem.
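A minimal sketch of the three framings, using scikit-learn on made-up data; the only point is which kind of estimator matches which kind of question:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

X = np.random.rand(100, 3)  # 100 invented samples with 3 features each

# Regression: the target is a number (e.g. next year's Bitcoin price).
LinearRegression().fit(X, np.random.rand(100))

# Binary classification: the target is one of two outcomes (win / lose).
LogisticRegression().fit(X, np.random.randint(0, 2, 100))

# Multi-class classification: the target is one of several object labels.
LogisticRegression().fit(X, np.random.randint(0, 5, 100))
```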

Another question that has to be answered is what kinds of mistakes are worse. Machine learning is not all-knowing, so it will make mistakes, but there are trade-offs to be made. Maybe we are building a system to find tumors in X-rays: in that case it might be better to cry wolf too often and have false positives, rather than miss a tumor. Or maybe it’s the opposite: we are trying to implement a facial recognition system. If the system misidentifies someone as a burglar, the wrong person will get sent to jail, which is a very bad consequence of a mistake made by “THE algorithm”.
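Here is a minimal sketch of that trade-off on made-up tumor predictions: lowering the decision threshold means crying wolf more often, but missing fewer tumors:

```python
import numpy as np

# Invented predicted tumor probabilities and the true labels (1 = tumor).
probs = np.array([0.95, 0.80, 0.60, 0.40, 0.30, 0.10])
truth = np.array([1, 1, 0, 1, 0, 0])

def report(threshold):
    preds = (probs >= threshold).astype(int)
    missed = np.sum((preds == 0) & (truth == 1))        # false negatives
    false_alarms = np.sum((preds == 1) & (truth == 0))  # false positives
    print(f"threshold={threshold}: missed={missed}, false alarms={false_alarms}")

report(0.5)  # stricter: 1 missed tumor, 1 false alarm
report(0.2)  # looser: 0 missed tumors, 2 false alarms
```

Which threshold is right depends entirely on which mistake costs more, and that is a product decision, not a modeling one.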

These are not just theoretical concerns; they actually matter a lot in building machine learning systems. Because of this, many ML projects are human-in-the-loop, meaning the model doesn’t decide by itself what to do; it merely makes a suggestion, which a human will then confirm. In many cases that is valuable enough, because it makes the human much more efficient. For example, a security guard doesn’t have to look at 20 screens at once, but can look at only the footage that was flagged as anomalous.

Tomorrow we’ll look at the next step: gathering the data.

What is ML? part 3

Yesterday we saw that machine learning is behind some successful products, and it does have the potential to bring many more changes to our lives.

So what is it?

Well, the textbook definition is that it’s the building of algorithms that can perform tasks they were not explicitly programmed to do. In practice, this means we have algorithms that analyze large quantities of data to learn patterns in it, which can then be used to make predictions about new data points.
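As a toy illustration (the features and labels below are invented), no rule for what makes an email spam is written by hand; the model infers one from labeled examples and then applies it to a new data point:

```python
from sklearn.tree import DecisionTreeClassifier

# Made-up features per email: [number of links, number of exclamation marks]
X = [[0, 0], [1, 0], [7, 5], [9, 3], [0, 1], [8, 6]]
y = [0, 0, 1, 1, 0, 1]  # labels: 0 = ham, 1 = spam

# The "pattern" (lots of links means spam) is learned, not programmed.
model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[6, 4]]))  # a new email -> [1], predicted spam
```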

This is in contrast with the classical way of programming computers, where a programmer would either use their domain knowledge or analyze the data themselves, and then write a program that produces the correct output.

So one of the crucial distinctions is that in machine learning, the machine has to learn from the data. If a human being figures out the pattern and writes a regular expression to find addresses in text, that’s human learning, and we all go to school to do that.
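For contrast, here’s roughly what the “human learning” version looks like: a person studies the data and hand-writes the pattern. The pattern below is a toy one for US-style street addresses, nowhere near production grade:

```python
import re

# A hand-written pattern: the human did the learning here, not the machine.
# Toy pattern for illustration; real address formats are far messier.
pattern = re.compile(r"\d+\s+\w+\s+(?:Street|Avenue|Road)")

text = "Send it to 221 Baker Street or drop it at 5 Elm Avenue."
print(pattern.findall(text))  # ['221 Baker Street', '5 Elm Avenue']
```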

Now does that mean that machine learning is a solution for everything? No. In some cases, it’s easier or cheaper to have a data analyst or a programmer find the pattern and code it up.

But there are plenty of cases where, despite decades-long efforts by big teams of researchers, humans haven’t been able to find an explicit pattern. The simplest example of this would be recognizing dogs in pictures. 99.99% of humans over the age of 5 have no problem recognizing a dog, whether a puppy, a golden retriever or a Saint Bernard, but they have zero insight into how they do it, into what makes a bunch of pixels on the screen a dog and not a cat. And this is where machine learning shines: you give it a lot of photos (several thousand at least), pair each photo with a label of what it contains, and the neural network will learn by itself what makes a dog a dog and not a cat.
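To make that concrete, here’s a minimal sketch of such a classifier in Keras. Random arrays stand in for the thousands of labeled photos; the shapes, sizes and names are illustrative assumptions, not a real dataset:

```python
import numpy as np
import tensorflow as tf

# Stand-ins for real data: replace with actual photos and their labels.
images = np.random.rand(100, 64, 64, 3)   # 100 photos, 64x64 pixels, RGB
labels = np.random.randint(0, 2, 100)     # 0 = cat, 1 = dog

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of "dog"
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(images, labels, epochs=3)
```

Nobody tells the network what an ear or a snout looks like; given enough real photos, it works those features out on its own.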

Machine learning is just one of many tools at our disposal. It’s a very powerful tool, and one that gets “sharpened” all the time, with lots of research being done all around the world to find better algorithms, to speed up their training and to make them more accurate.

Come back tomorrow to find out how the sausage is made, at a high level.

What is ML? part 2

Yesterday I wrote about how AI made big promises in the past but failed to deliver, and how now it’s different, because of machine learning.

What’s changed?

Well, now we have several products that work well using machine learning. My favorite examples are Google Photos, Synology Moments and PhotoPrism. They are all photo management applications which use machine learning to automatically detect all faces in pictures (easy, we’ve had this for 15 years), automatically recognize which pictures are of the same person (hard, but doable by hand if you had too much time) and, more than that, index photos by all kinds of objects found in them, so that you can search by what items appear in your photos (really hard, nobody had time to do that manually).

I have more than 10 years of photos uploaded to my Synology, and one of my favorite party tricks when talking to someone is to whip out my phone and show them all the photos I have of them: from when they were kids, from the last time we met, or of that funny thing that happened to them that I have photographic evidence of. Everyone is amazed by that (and some are horrified and deny that they looked like that when they were children). And there is not one but at least three options to do this, one of which is open source, so anyone can run it at home on their own computer, for free. There is clearly demand for such a product.

Other successful examples are in the domain of recommender systems, YouTube being a good one. I have a love/hate relationship with it: on one hand, I’ve lost so many hours of my life to the recommendations it makes (which is proof of how good it is at making personalized suggestions); on the other hand, I’ve found plenty of cool videos with it. This deep learning based recommender system is one of the factors behind the growth of watch time on YouTube, which is basically the key metric behind revenue (more watch time, more ads).

These are just two examples that are available for everyone to use, and they serve as evidence that machine learning based AI is not just hot air this time.

But I still haven’t answered the question what is ML… tomorrow, I promise.

What is ML?

Machine learning is everywhere these days. Mostly in newspapers, but it’s also seeping into many real-life use cases. But what is it, actually?

If you read only articles on TechCrunch, Forbes, Business Insider or even MIT Technology Review, you’d think it’s something that will soon bring the T-800 to life, or that it will cure cancer and make radiologists useless, or that it will enable humans to upload their minds to the cloud and live forever, or that it will bring fully self-driving cars by the end of the year (every year for the last 5 years).

Many companies want to jump on the ML bandwagon. It’s understandable: 1) that’s where the money is (some 10 billion dollars were invested in it in 2018) and 2) done correctly and applied to the right problems, ML can actually be really valuable, either by automating things that were previously done with manual labor or by enabling things that were previously unfeasible.

But at the same time, a lot of ML projects make unrealistic promises, eat a lot of money and then deliver something that doesn’t work well enough to have a positive ROI. The ML engineers and researchers are happy: they got paid, analyzed the data, played around with building ML models and maybe even published a paper or two. But the business is not happy, because it is not better off in any way.

This is not a new phenomenon. Artificial Intelligence, of which Machine Learning is a subdomain, has been plagued by similar bubbles ever since it was founded. AI has already gone through several AI winters, in the 60s, the 80s and the late 90s. Big promises, few results.

To paraphrase Battlestar Galactica, “All this has happened before, all this will happen again, but this time it’s different”. But why is it different? More about that tomorrow.