Lightning talk (5 minutes)
Data Science
Machine Learning
Analysis

As data scientists we find ourselves staring at the metrics of our analyses and models, trying to figure out what's going on. We get so locked in on improving metrics such as MSE that we neglect to look at the actual results. Sometimes biases in the results are so blatant that just skimming the raw predictions makes them stand out like a sore thumb. In this talk I'll focus on such a case I encountered while developing a recommender system for breaking news stories. Using examples from this project and some back-of-the-envelope probabilities, I'll argue that for certain problems, looking at the results has more descriptive value than many summary metrics. Hopefully I'll convince you to get to know your predictions better, so that the next time you're analysing results, you'll start by actually looking at them.

Aviv Rotman