Artificial Intelligence

Course taught within the Bachelor programme in Computer Science at Sapienza University of Rome.

Intelligenza Artificiale

Instructors

Course fact sheet

  • Sapienza course ID: 1022262
  • Number of credits: 6
  • Language of lectures: Italian

Objectives

General goals. The course aims to give students a wide-spectrum introduction to Artificial Intelligence (AI).

Specific goals. The course aims to make students proficient both in the theoretical understanding of a broad set of AI techniques and in the practical use of those techniques in the design of intelligent software systems.

Knowledge and understanding. Students will gain a wide-spectrum introduction to the foundational principles and the main branches of Artificial Intelligence (AI), together with knowledge of problem solving by search, logical inference, planning, automated reasoning, and learning.

Applying knowledge and understanding. The successful student will be able to exploit the portfolio of techniques and approaches presented in the course to design and successfully implement intelligent software systems.

Critical and judgmental skills. Students will be able to make autonomous, well-reasoned decisions about the most effective AI techniques to employ in the design of intelligent software systems.

Communication skills. Students will be able to interact proficiently with AI researchers and practitioners on a wide range of AI topics.

Learning capabilities. Students will be able to extend their skills in the subjects of this course by independently reading the scientific literature on AI.

Programme

Part A

  • Introduction to Artificial Intelligence (AI), high-level survey of its main branches, historical aspects.
  • Intelligent agents, rationality, different types of agents and of task environments.
  • Uninformed and informed search strategies for systematic (black-box) state-space exploration: breadth-first, uniform-cost (min-cost), depth-first, depth-limited, iterative deepening, greedy best-first, A*; heuristic functions: admissibility and consistency; strengths and limitations of systematic state-space exploration.
  • Iterative improvement algorithms: local search (hill-climbing, steepest-ascent/descent, simulated annealing) and evolutionary search (genetic algorithms), strengths and limitations of iterative improvement algorithms.
  • Constraint satisfaction and optimisation problems: definition, variants, problem modelling, constraint propagation, local consistency notions (node-, arc-, generalised arc-, path-, K-consistency), global constraints, backtracking, conflict-driven backjumping, local search, strengths and limitations of constraint satisfaction and optimisation.
  • Knowledge and reasoning – agents based on propositional logic: modelling and inference (resolution, forward chaining for Horn knowledge bases), SAT for inference and combinatorial problem solving, DPLL, CDCL (glimpses), strengths and limitations of knowledge representation and reasoning approaches based on propositional logic.
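To give a concrete flavour of the Part A material, the following is a minimal sketch of A* search in Python. The grid world, move costs, and Manhattan heuristic are illustrative toy choices, not course material; the heuristic is admissible and consistent for this grid, which is what guarantees A* returns a least-cost path.

```python
import heapq

def a_star(start, goal, neighbours, h):
    """A* search: returns (path, cost) for a least-cost path from start
    to goal, assuming h is admissible (never overestimates true cost)."""
    frontier = [(h(start), 0, start, [start])]  # entries: (f, g, state, path)
    best_g = {start: 0}                         # cheapest known cost per state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for nxt, step_cost in neighbours(state):
            g2 = g + step_cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")

# Toy example: a 4x4 grid with 4-connected unit-cost moves.
def grid_neighbours(p):
    x, y = p
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < 4 and 0 <= ny < 4:
            yield (nx, ny), 1

manhattan = lambda p: abs(p[0] - 3) + abs(p[1] - 3)  # admissible here
path, cost = a_star((0, 0), (3, 3), grid_neighbours, manhattan)
```

On this grid the optimal cost is 6 (three steps right, three steps up, in some order); with a consistent heuristic the first time the goal is popped from the frontier its cost is optimal, which is why the function can return immediately.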

Part B

  • Knowledge and reasoning – agents based on first-order logic: modelling and inference (propositionalisation, lifting, resolution, forward chaining), strengths and limitations of knowledge representation and reasoning approaches based on first-order logic.
  • Classical planning: modelling via PDDL, solving via informed search and heuristics, reduction to SAT and to CSP, Situation Calculus, strengths and limitations of classical planning approaches.
  • Knowledge and reasoning under uncertainty: Bayesian network modelling and exact inference algorithms, strengths and limitations of Bayesian networks.
  • Principles of decision theory: utility functions, rational preferences, maximum expected utility, optimisation under multiple objectives.
  • Principles of machine learning: supervised, unsupervised and reinforcement learning, classification vs. regression, induction of decision trees by examples, performance evaluation (stability, bias, variance, error metrics and loss functions, cross-validation), model selection, k-fold cross-validation, ensemble learning (bagging, random forests, stacked generalisation, boosting), support vector machines (glimpses), neural networks (glimpses), parametric vs. non-parametric models, k-nearest neighbours, clustering (k-means and glimpses of other algorithms), strengths and limitations of machine learning approaches (under- and over-fitting, correlation vs. causality, explainability, adversarial examples).
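As a taste of the machine-learning material in Part B, here is a minimal sketch of the k-nearest-neighbours classifier in Python. The two-cluster dataset and its labels are made up for illustration; the point is the method itself: classify a query by majority vote among the k closest training examples under Euclidean distance.

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """k-nearest-neighbours: majority vote among the k training points
    closest to the query (Euclidean distance). train is a list of
    (feature_vector, label) pairs."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy dataset: two well-separated clusters with illustrative labels.
train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"), ((0.2, 0.1), "a"),
         ((1.0, 1.0), "b"), ((0.9, 1.1), "b"), ((1.1, 0.9), "b")]

print(knn_predict(train, (0.15, 0.1)))  # near the "a" cluster -> "a"
print(knn_predict(train, (1.05, 1.0)))  # near the "b" cluster -> "b"
```

Note how this is a non-parametric model, as covered in the programme: there is no training phase at all, and the entire dataset is kept and consulted at prediction time, with k controlling the bias-variance trade-off.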