Due to planned rail strikes, this event has been postponed. A new date will be announced soon.
Foundation models are an important emerging class of artificial intelligence (AI) systems, characterised by the use of very large machine learning models, trained on extremely large and broad data sets, requiring considerable compute resources during training. Large language models (LLMs) such as OpenAI’s GPT-3 and Google’s LaMDA are the best-known examples of foundation models and have attracted considerable attention for their ability to generate realistic natural language text and engage in sustained, coherent natural language dialogues.
They have also demonstrated limited capabilities in other classic AI domains, such as common-sense reasoning and problem-solving. A key bet with foundation models is that they acquire competence in a broad range of tasks, which can then be specialised with further training for specific applications. Foundation models are already finding innovative applications, such as GitHub’s Copilot system, which can generate computer code from natural language descriptions (“a Python function to find all the prime numbers in a list”).
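To make the example concrete, a prompt like the one above might yield code along these lines. This is a minimal illustrative sketch of the kind of function such a system could produce, not actual Copilot output; the function name and trial-division approach are our own choices.

```python
def find_primes(numbers):
    """Return the prime numbers found in a list of integers."""
    def is_prime(n):
        # Primes are integers greater than 1 with no divisor
        # other than 1 and themselves.
        if n < 2:
            return False
        # Trial division up to the square root is sufficient.
        for d in range(2, int(n ** 0.5) + 1):
            if n % d == 0:
                return False
        return True

    return [n for n in numbers if is_prime(n)]
```

The striking point is not the algorithm itself, which is elementary, but that a model trained primarily on text and code can map an informal English description to a working implementation like this.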
About the event
The Alan Turing Institute will host a one-day symposium to explore the state-of-the-art in foundation models – how they work, what they are and will be capable of, how they are being and will be used, and how to address the many challenges – both technical and ethical – that they raise.
During the event you will hear from key researchers in foundation models. Topics of interest include but are not restricted to:
- Machine learning pipeline/software architectures: How should data and system components be organised for optimally training foundation models?
- Machine learning systems for foundation models: Training state-of-the art foundation models requires sophisticated (and expensive) computational systems to support them during training, which limits who can develop and deploy them.
- Training data and the problem of toxicity: A widespread concern about foundation models is that they are often trained on data scraped from the web, which inevitably contains toxic and biased content. What issues does this raise, and how can we address them? What alternative avenues are available?
- Benchmarks: While foundation models have proved to have some impressive capabilities, the extent and limits of these capabilities are not well understood. How can we systematically benchmark the capabilities of different systems without falling prey to the standard problem of researchers simply optimising system design for a narrow benchmark?
- Applications and future directions: What are the most promising applications of foundation models? Where will we see them deployed soon? What are the main risks in building applications based on such models, and how do we ameliorate them? Beyond language models, what should we expect to see in the next few years, perhaps building on multi-modal approaches like CLIP or Imagen?
A key outcome for this event will be to meet researchers and academics working on foundation models and to build a network. There will be networking opportunities throughout the day, including a networking reception at the end of the event. We will also be creating a shared mailing list, with the hope that this will facilitate future collaborations.
A full agenda will be available shortly.
We welcome participation from all backgrounds, but this event will be of particular relevance to researchers, practitioners, and policy makers with an interest in foundation models. Within the Turing, we aspire to establish a substantial body of work around foundation models in the years ahead: if you have an interest in participating in this work, then we urge you to attend.
An access fund is available for anyone who would otherwise be unable to attend.