What is AI?
Modern AI has its roots in the 1950s, but it builds on an age-old dream of understanding human thinking that dates back to antiquity.
Defining AI precisely is notoriously hard, as our expectations of what behaviour counts as “intelligent” in machines keep shifting – one hundred years ago that would have included pocket calculators and programmable alarm clocks. Asking a handheld device to recommend songs using spoken language would have been considered almost impossible even a couple of decades ago.
In general terms, however, AI research tries to emulate aspects of intelligence by developing methods inspired by our intuitions about, and observations of, human and, in some cases, animal behaviour. Modern-day AI is both about understanding intelligence and about engineering solutions, many of which have found their way into everyday use.
Much of the current interest in AI has been fuelled by the success of Machine Learning, an area that develops algorithms to detect patterns and extract knowledge and insights from data for analysis and prediction purposes.
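To make this idea concrete, here is a small, purely illustrative sketch in Python (the data and model are invented for this example and are not drawn from any real system). It “learns” a simple pattern from a handful of example points and then uses it to predict an unseen case:

import numpy as np

# Toy observed data: hours of study and the resulting exam scores.
hours = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
scores = np.array([52.0, 57.0, 66.0, 70.0, 78.0, 83.0])

# "Learning": find the straight line that best fits the observations.
slope, intercept = np.polyfit(hours, scores, deg=1)

# "Prediction": apply the learned pattern to a new, unseen input.
predicted = slope * 7.0 + intercept
print(f"Predicted score after 7 hours of study: {predicted:.1f}")

Real machine-learning systems use far larger datasets and far more flexible models, but the underlying idea of fitting a model to data and then using it for prediction is the same.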
Driven by the growth in available data and increasing compute power, machine learning has found its way into many types of applications. Widely deployed everyday applications are largely found in natural language processing (e.g. voice-controlled assistants), image processing (e.g. tracking objects in photos and videos), and robotics and autonomous/semi-autonomous systems (e.g. autonomous vehicles, manufacturing). AI often operates behind the scenes, e.g. driving personalisation in recommendation systems, internet search, social media, video games, or business intelligence systems – where it is used to perform very specific, narrow functions.
The emergence of new Generative AI (GenAI) systems in the early 2020s marked a step change in AI capabilities: for the first time, these broad AI systems were able to generate content (e.g. text, images, audio, video, computer code) in response to human questions or instructions, and to perform a vast range of tasks. These systems use enormous models, some with more than a trillion internal parameters, that have been trained on huge amounts of data to predict words, images and other content, and their responses often create an impressive semblance of intelligence. These models, called large language models when they generate language (or, more broadly, foundation models when they also generate other types of content), have no understanding of the world or of what they say, and hence can produce inaccurate, biased, or inappropriate responses. Despite those limitations, they are widely expected to enable the development of many new applications, and we are only just beginning to understand their potential.
Throughout its history, AI has undergone several “hype cycles”, in which impressive successes led to inflated expectations that were later followed by disillusionment. We are currently experiencing one such cycle due to the emergence of GenAI, and it is unclear what its impact will be in the longer term. Despite these swings in public opinion, continued research efforts have often led to success in the long term.
The University of Edinburgh has shaped the evolution of AI since the 1960s, and has exceptional strengths across all its subfields. We are committed to supporting the whole breadth of AI research, and advancing the state of the art by integrating different techniques.