Many analyses in data science are not one-off projects but are repeated over multiple data samples, such as once per month or once per quarter. For example, if a data scientist performs an analysis in 2017 that saves a significant amount of money, then she will likely be asked to perform the same analysis on data from 2018. But more data analyses mean more effort spent on data wrangling. We introduce the data diff problem, which attempts to turn this problem into an opportunity: when the repeated data samples are compared against each other, inconsistencies may be indicative of underlying data quality issues. By analogy to text diff, the data diff problem is to find a “patch”, that is, a transformation in a specified domain-specific language, that transforms the data samples so that they are identically distributed. We present a prototype tool for data diff that formalizes the problem as a bipartite matching problem, calibrating its parameters using a bootstrap procedure. The tool is evaluated quantitatively and through a case study on an open government data set.
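To give a flavour of the bipartite-matching formulation, the sketch below pairs up the columns of two data samples by distributional similarity. This is a minimal illustration, not the authors' tool: the choice of the two-sample Kolmogorov–Smirnov statistic as the pairing cost, and the synthetic "2017"/"2018" columns, are assumptions made here for demonstration.

```python
# Sketch: match columns of two data samples by distributional similarity,
# formulated as a minimum-cost bipartite matching. The cost of pairing
# column i of sample A with column j of sample B is the two-sample
# Kolmogorov-Smirnov statistic (an illustrative choice, not the paper's).
import numpy as np
from scipy.stats import ks_2samp
from scipy.optimize import linear_sum_assignment

def match_columns(sample_a, sample_b):
    """Return column pairings (i, j) minimising total KS distance."""
    cost = np.array([[ks_2samp(a, b).statistic for b in sample_b]
                     for a in sample_a])
    rows, cols = linear_sum_assignment(cost)  # Hungarian-style matching
    return list(zip(rows.tolist(), cols.tolist())), cost[rows, cols]

rng = np.random.default_rng(0)
a = [rng.normal(0, 1, 500), rng.normal(5, 1, 500)]  # "2017" columns
b = [rng.normal(5, 1, 500), rng.normal(0, 1, 500)]  # "2018" columns, reordered
pairs, dists = match_columns(a, b)
print(pairs)  # the matching recovers the column swap: [(0, 1), (1, 0)]
```

A real data diff would search over transformations in a domain-specific language (renames, rescalings, recodings) rather than only permutations, and would calibrate the decision threshold for "same distribution" with a bootstrap procedure, as the abstract describes.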

Citation information

Sutton, C., Hobson, T., Geddes, J. & Caruana, R. 2018, ‘Data Diff: Interpretable, Executable Summaries of Changes in Distributions for Data Wrangling’, in Knowledge Discovery and Data Mining Conference 2018 (KDD 2018), London, United Kingdom, 19/08/18.
