## Is overfitting… good?

**Published:**

Conventional wisdom in data science and statistical learning tells us that when we fit a model meant to learn from our data and generalize what it learned to unseen data, we must keep the bias-variance trade-off in mind. As we increase a model's complexity (say, its number of learnable parameters), it becomes more likely to simply *memorize* the training data and generalize poorly to unseen data. On the other hand, if we keep the complexity too low, the model will not be able to *learn* enough from our data and will not do well either. We are told to find the *sweet spot* in between. But is this paradigm about to change? In this article we review new developments that suggest this might be the case.
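To make the trade-off concrete, here is a minimal sketch (a hypothetical toy experiment, not from the article) that fits polynomials of increasing degree to a small noisy sample. The degree-1 model underfits, while the degree-9 model has enough parameters to interpolate all ten training points, driving training error to nearly zero, yet it typically does worse than a moderate-degree model on fresh data:

```python
import math
import random

random.seed(0)

def target(x):
    # Ground-truth function the models try to recover.
    return math.sin(math.pi * x)

# Small noisy training set and a larger test set (toy data).
xs_tr = [random.uniform(-1, 1) for _ in range(10)]
ys_tr = [target(x) + random.gauss(0, 0.2) for x in xs_tr]
xs_te = [random.uniform(-1, 1) for _ in range(50)]
ys_te = [target(x) + random.gauss(0, 0.2) for x in xs_te]

def fit_poly(xs, ys, degree):
    # Least-squares polynomial fit via the normal equations X^T X w = X^T y.
    X = [[x ** p for p in range(degree + 1)] for x in xs]
    n = degree + 1
    A = [[sum(row[i] * row[j] for row in X) for j in range(n)] for i in range(n)]
    b = [sum(X[k][i] * ys[k] for k in range(len(xs))) for i in range(n)]
    # Solve with Gaussian elimination and partial pivoting.
    M = [A[i] + [b[i]] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    w = [0.0] * n
    for i in range(n - 1, -1, -1):
        w[i] = (M[i][n] - sum(M[i][j] * w[j] for j in range(i + 1, n))) / M[i][i]
    return w

def mse(w, xs, ys):
    # Mean squared error of the fitted polynomial on (xs, ys).
    preds = [sum(c * x ** p for p, c in enumerate(w)) for x in xs]
    return sum((p - y) ** 2 for p, y in zip(preds, ys)) / len(ys)

train_mse, test_mse = {}, {}
for d in (1, 4, 9):  # underfitting, moderate, and memorizing models
    w = fit_poly(xs_tr, ys_tr, d)
    train_mse[d] = mse(w, xs_tr, ys_tr)
    test_mse[d] = mse(w, xs_te, ys_te)
    print(f"degree {d}: train MSE {train_mse[d]:.4f}, test MSE {test_mse[d]:.4f}")
```

The script uses only the standard library so it runs anywhere; training error shrinks monotonically with degree (the classic "memorization" half of the trade-off), while the gap between training and test error widens for the most complex model.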