What is IOAI
Format of the olympiad
The International Olympiad in Artificial Intelligence consists of two rounds: a scientific round and a practical round.
In both rounds, the aim of the solutions is not necessarily to reach "the correct answer", as there may not be one. The scientific round is metric-oriented: solutions are scored on their performance against a pre-defined, task-specific metric. The practical round is conclusion-oriented: participants design and perform experiments and draw conclusions about the capabilities and limitations of AI.
Scientific round
In this round, the participating teams are given problems that mimic real-world scientific research and the process of identifying and addressing limitations in an existing approach. Good performance in this round depends on basic coding skills, familiarity with common deep learning Python libraries, and an understanding of the fundamentals of machine learning.
The teams receive 3 problems based on recent AI research 6 weeks in advance of the IOAI, and work on them on their own schedule. At the end of the allotted time, the teams submit their solutions to all 3 problems in the form of working code and model outputs.
At the IOAI, the teams receive a set of 3 new problems that build on the 3 problems they worked on at home: the general setting remains the same in terms of AI task, data type, and model architecture, but the teams must solve a new challenge within that setting.
The problems in the scientific round will be distributed as Google Colab notebooks, and solutions will be submitted as modified versions of the same notebooks. Participants are required to use Python for their solutions and to ensure that, upon submission, their notebooks are fully executable within the Colab environment. Further instructions will be given for specific problems regarding the maximum time a notebook may take to execute, restrictions on the use of pre-trained models and external data, etc.
The deliverables for each problem will be clearly stated in the problem description and may include a score measured on a specific data split, a short written answer or methodological report, and a plot visualizing relevant statistics or results, among others. Each problem will specify how points are distributed between the different deliverables.
The final scores for this round are based in small part on the performance of the solutions developed at home, and in large part on the performance of the solutions developed on site. Exact scoring details will be provided upon distribution of the first set of problems.
Practical round
This round takes place entirely on site at the IOAI and is intended to acquaint students with the workings of widely used AI software such as ChatGPT and DALL·E 2. The problems require teams to inspect and analyze the behavior of working AI software and to answer scientific questions about it.
Teams are given several problems to work on within a time window of 2 to 4 hours, with access to one internet-connected computer per team and no other devices. They interact with the AI software through a GUI, the way a regular user would, so no coding is required in this round.
The answers submitted at the end of the allotted time are evaluated by the Jury according to problem-specific criteria, which may be based on the metric score a team achieved, the number of valid solutions they found, or the ingenuity and robustness of their solutions. The way points are allocated will be specified in each problem's description.