Partner Event - Exploring foundation models

About this event 

The aim of this one-day event is to explore the state of the art in foundation models – how they work, what they are and will be capable of, how they are being and will be used, and how to address the many challenges – both technical and ethical – that they raise.

Foundation models are an important emerging class of artificial intelligence (AI) systems, characterised by very large machine learning models trained on extremely large and broad datasets, at considerable computational cost. Large language models (LLMs) such as OpenAI’s GPT-3 and Google’s LaMDA are the best-known examples of foundation models, and have attracted considerable attention for their ability to generate realistic natural language text and to engage in sustained, coherent natural language dialogue.

They have also demonstrated limited capabilities in other classic AI domains, such as common-sense reasoning and problem solving. A key bet with foundation models is that they acquire competence across a broad range of tasks, which can then be specialised with further training for specific applications. Foundation models are already finding innovative applications, such as GitHub Copilot, which can generate computer code from natural language descriptions (“a Python function to find all the prime numbers in a list”), along the lines of the sketch below.
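
As an illustration only – this is a hand-written sketch of the kind of function such a prompt describes, not actual Copilot output – the result might look something like the following:

    def find_primes(numbers):
        """Return the prime numbers found in a list of integers."""
        def is_prime(n):
            # Numbers below 2 are not prime; otherwise test divisors up to sqrt(n).
            if n < 2:
                return False
            for d in range(2, int(n ** 0.5) + 1):
                if n % d == 0:
                    return False
            return True
        return [n for n in numbers if is_prime(n)]

    # Example usage: find_primes([2, 3, 4, 15, 17, 23, 24]) returns [2, 3, 17, 23]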

Registration will be opening soon.