1/n First entry on The Epsilon, my new #machinelearning blog: ML for dummies.
A 🧵...
2/n My favorite definition of ML is Tom Mitchell's:
"A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E."
An example:
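Detecting Parkinson's disease from voice measurements, mapped onto Mitchell's terms:
T = classify a patient as disease or healthy,
E = a dataset of voice measurements labeled with each patient's diagnosis,
P = the fraction of patients classified correctly.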
3/n The goal of learning is to approximate an unknown function from data. More formally:
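A sketch, in notation of my own: given examples $\{(x_i, y_i)\}_{i=1}^{N}$ generated by some unknown target function $f : X \to Y$, learning means choosing a hypothesis $h : X \to Y$ such that $h(x) \approx f(x)$, including on inputs $x$ we have never seen.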
4/n As you might have guessed, there are different types of tasks T, and each is handled differently.
Take the Parkinson's disease example above: it is a classification problem, in its simplest form binary classification. A or B. Disease or healthy.
5/n The goal is to train a model on the labeled examples and then make predictions on new, unlabeled voice measurements.
Once a prediction is made, we want to measure how well we are doing. For a binary classification problem, we can approach it this way:
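With $y$ the true label and $\hat{y}$ the model's output (notation mine):

$$
s(y, \hat{y}) =
\begin{cases}
1 & \text{if } y = \hat{y} \\
0 & \text{if } y \neq \hat{y}
\end{cases}
\qquad
\text{accuracy} = \frac{1}{N}\sum_{i=1}^{N} s\big(y_i, \hat{y}_i\big)
$$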
6/n The expression above defines a scoring function that takes two arguments: the true label and the model output. It returns 1 if they are equal and 0 otherwise. Averaging these scores over all predictions gives the model's accuracy.
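A minimal sketch in Python (my own toy example: synthetic features stand in for real voice measurements, and LogisticRegression is just one possible model choice):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy stand-in for voice-measurement features (synthetic data for
# illustration; a real version would load an actual labeled dataset).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                   # 200 patients, 5 features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # 1 = disease, 0 = healthy

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train on labeled examples, predict on held-out ("unlabeled") ones.
model = LogisticRegression().fit(X_train, y_train)
y_pred = model.predict(X_test)

# The 0/1 score from above: 1 when label == prediction, else 0.
# Its average over the test examples is the accuracy.
accuracy = np.mean(y_pred == y_test)
print(f"accuracy: {accuracy:.2f}")
```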
7/n There is much more to say, even in a super short machine learning primer: model underfitting/overfitting, inductive bias in ML models, and more.
See more on the blog and in the next thread: