A common problem arising in statistical inference is the need to unify distributed analyses and inferences on shared parameters from multiple sources into a single coherent inference. This unification (or 'fusion') problem can arise either explicitly, due to the nature of a particular application, or artificially, as a consequence of the approach a practitioner takes to tackle an application. This project considers the particular setting in which each inference belongs to a separate party, and each party wishes to retain confidentiality of their underlying data, with guarantees.
- Research into the statistical use of cryptographic techniques for the purposes of inferential privacy
- Development of 'fusion' theory to understand (and mitigate) issues such as scaling with dimensionality and with the number of parties being unified, and to understand what constitutes a good approximate method
- Development of methodology to broaden the applicability of the work to common statistical settings (such as Bayesian inference)
- Building industrial relationships to popularise the work
Aside from direct applications in data privacy settings, there are specific examples of this 'fusion' problem arising naturally in applications. One example is expert elicitation, in which the (distributional) views of multiple experts on a topic (or set of parameters) have to be pooled into a single view before a decision maker can make an informed decision.
Another example is multiview learning and meta-analysis, in which one interpretation is that we are synthesising multiple inferences on a particular parameter set (computed on datasets which may or may not be of the same type), but the underlying raw data are not directly available for the unified inference. The problem also arises by construction in settings including tempering methodology within statistics, and in distributed big-data settings.
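To make the fusion target concrete: if each of C parties holds a density f_c(x) for a shared parameter x, the unified inference targets the product density f(x) ∝ ∏_c f_c(x). In the special case where every sub-density is Gaussian, this product is available in closed form (precisions add; the fused mean is the precision-weighted average of the sub-means). The sketch below illustrates only this toy Gaussian case, not the general Monte Carlo Fusion methodology; the helper name `fuse_gaussians` is ours for illustration.

```python
def fuse_gaussians(means, variances):
    """Fuse C Gaussian sub-densities N(mu_c, sigma_c^2) into the product
    density f(x) ~ prod_c f_c(x), which is again Gaussian.

    The fused precision is the sum of the sub-precisions, and the fused
    mean is the precision-weighted average of the sub-means.
    """
    precisions = [1.0 / v for v in variances]
    fused_precision = sum(precisions)
    fused_mean = sum(p * m for p, m in zip(precisions, means)) / fused_precision
    return fused_mean, 1.0 / fused_precision


# Example: three parties hold Gaussian inferences on the same parameter.
mu, var = fuse_gaussians([0.8, 1.2, 1.0], [0.5, 0.25, 1.0])
```

Note that even in this toy case the fused variance shrinks as parties are added (fusing C identical N(mu, s^2) densities gives variance s^2/C), which hints at why naive approximate fusion schemes can degrade as the number of parties grows.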
Acceptance of initial "Monte Carlo Fusion" paper (Journal of Applied Probability, January 2019).