Adding Common Sense to Machine Learning with TensorFlow Lattice

by TAMAN NARAYAN & SEN ZHAO

A data scientist is often in possession of domain knowledge that she cannot easily impose on the structure of her model. On the one hand, basic statistical models (e.g. linear regression, trees) can be too rigid in their functional forms. On the other hand, sophisticated machine learning models are flexible in their form but not easy to control. This blog post motivates this problem more fully, and discusses monotonic splines and lattices as a solution. While the discussion is about methods and applications, the blog also contains pointers to research papers and to the TensorFlow Lattice package that provides an implementation of these solutions. The authors of this post are part of the team at Google that builds TensorFlow Lattice.

Introduction

Machine learning models often behave unpredictably, as data scientists would be the first to tell you. For example, consider fitting a simple two-dimensional function to predict if someone wi…
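To make the idea of a monotonicity constraint concrete, here is a minimal NumPy sketch (deliberately not the TensorFlow Lattice API) of a monotonically increasing piecewise-linear calibrator: the function values at fixed keypoints are parameterized as cumulative sums of squared increments, so the fit is increasing by construction no matter what the optimizer does. The keypoint grid, learning rate, and target function are illustrative choices, not anything from the post.

```python
import numpy as np

# Sketch of a monotonic piecewise-linear fit (not the TFL API):
# squaring the raw increments keeps every step non-negative, so the
# fitted curve is monotonically increasing by construction.

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 200))
y = np.sqrt(x) + rng.normal(0.0, 0.05, 200)   # noisy monotone target

keypoints = np.linspace(0.0, 1.0, 11)

def predict(params, x):
    # params[0] is the base value; params[1:] are raw increments.
    values = params[0] + np.concatenate([[0.0], np.cumsum(params[1:] ** 2)])
    return np.interp(x, keypoints, values)

def mse(params):
    return np.mean((predict(params, x) - y) ** 2)

# Plain numerical-gradient descent, kept simple on purpose.
params = np.full(len(keypoints), 0.1)
lr, eps = 0.2, 1e-5
for _ in range(800):
    grad = np.zeros_like(params)
    base = mse(params)
    for i in range(len(params)):
        p = params.copy()
        p[i] += eps
        grad[i] = (mse(p) - base) / eps
    params -= lr * grad

fitted = predict(params, keypoints)
assert np.all(np.diff(fitted) >= 0)  # monotonicity holds by construction
```

The point of the construction is that monotonicity is a hard guarantee of the parameterization rather than a property the data happens to produce; TensorFlow Lattice applies the same philosophy with calibration and lattice layers at scale.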

Changing assignment weights with time-based confounders

by ALEXANDER WAKIM

Ramp-up and multi-armed bandits (MAB) are common strategies in online controlled experiments (OCE). These strategies involve changing assignment weights during an experiment. However, if one changes assignment weights when there are time-based confounders, ignoring this complexity can lead to biased inference in an OCE. In the case of MABs, it can also lead to poor total reward, making the strategy counterproductive. In this post we discuss the problem, a solution, and practical considerations.

Background

Online controlled experiments

An online controlled experiment (OCE) randomly assigns different versions of a website or app to different users in order to see which version causes more of some desired action. In this post, these “versions” are called arms and the desired action is called the reward (arms are often called “treatments” and the reward is often called the “dependent variable” in other contexts). Exampl…
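The bias the excerpt warns about can be seen in a small hypothetical simulation (all numbers below are made up for illustration): two arms whose true difference is +0.1 in favor of arm B, rewards that drift upward on day 2 for everyone, and assignment weights that ramp B up on day 2. A naive pooled comparison of arm means confounds the arm effect with the day effect, while stratifying by day recovers the truth.

```python
import numpy as np

# Hypothetical two-day experiment: arm B is truly +0.1 better, but
# day 2 rewards drift up by +1.0 for both arms (a time-based confounder).
# Because B's assignment weight also changes on day 2 (0.5 -> 0.9),
# the naive pooled comparison is biased; stratifying by day is not.

rng = np.random.default_rng(42)

def simulate_day(n, p_b, day_effect):
    arm_b = rng.random(n) < p_b                      # True -> arm B
    reward = day_effect + 0.1 * arm_b + rng.normal(0.0, 0.5, n)
    return arm_b, reward

arm1, r1 = simulate_day(50_000, p_b=0.5, day_effect=0.0)  # day 1
arm2, r2 = simulate_day(50_000, p_b=0.9, day_effect=1.0)  # day 2: ramped up

arm = np.concatenate([arm1, arm2])
r = np.concatenate([r1, r2])

# Naive pooled estimate: B is overrepresented on the high-reward day,
# so the arm effect absorbs part of the day effect.
naive = r[arm].mean() - r[~arm].mean()

# Day-stratified estimate: difference within each day, averaged over days.
stratified = ((r1[arm1].mean() - r1[~arm1].mean())
              + (r2[arm2].mean() - r2[~arm2].mean())) / 2

print(f"naive:      {naive:.3f}")       # far above the true +0.1
print(f"stratified: {stratified:.3f}")  # close to the true +0.1
```

The stratified estimator here is just one simple repair; the post itself discusses the problem and solutions in more depth.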

Humans-in-the-loop forecasting: integrating data science and business planning

by THOMAS OLAVSON

Thomas leads a team at Google called "Operations Data Science" that helps Google scale its infrastructure capacity optimally. In this post he describes where and how having “humans in the loop” in forecasting makes sense, and reflects on past failures and successes that have led him to this perspective.

Our team does a lot of forecasting. It also owns Google’s internal time series forecasting platform described in an earlier blog post. I am sometimes asked whether there should be any role at all for "humans-in-the-loop” in forecasting. For high-stakes, strategic forecasts, my answer is: yes! But this doesn't have to be an either-or choice, as I explain below.

Forecasting at the “push of a button”?

In conferences and research publications, there is a lot of excitement these days about machine learning methods and forecast automation that can scale across many time series. My team and I are excited by this too (see [1] for reflections on…