|             | Tue 7         | Wed 8         | Thu 9          | Fri 10    | Sat 11       |
| 09:00-11:00 | Lecture 1     | Lecture 3     | Lecture 5      | Lecture 6 | Workshop     |
| 11:20-12:20 | Lecture 1     | Lecture 3     | Lecture 5      | Lecture 6 | Workshop     |
| 13:30-15:30 | Lecture 2     | Lecture 4     | Workshop       | Workshop  | Social event |
| 15:50-17:00 | Lecture 2     | Lecture 4     | Hike to Fløyen | Workshop  | Social event |
| 17:00-19:00 | Walk and talk | Social dinner |
Lecture 1 – Introduction to Knowledge Graphs
Knowledge Graphs have received growing attention in recent years, particularly in scenarios that involve integrating diverse sources of data at large scale. Within such scenarios, Knowledge Graphs have popularised the idea of modelling data following a graph-based abstraction, where nodes represent entities and edges represent the relations between entities. In terms of research, Knowledge Graphs have become a novel point of convergence for different communities, wherein a variety of techniques for creating, enriching, validating and analysing Knowledge Graphs have been proposed, alongside techniques for querying, reasoning, and generating machine learning models over them. In terms of practice, Knowledge Graphs are now used in diverse applications involving question answering, recommendations, classification and prediction, semantic search, information extraction, and more besides. In this lecture, we will provide an introduction to Knowledge Graphs, covering the basics of how they are modelled, the techniques that they enable, the research questions that they raise, and the applications in which they have been used.
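As a minimal sketch of the graph-based abstraction described in the abstract, a Knowledge Graph can be represented as a set of (subject, relation, object) triples, where nodes are entities and edges are labelled relations. The entities and relation names below are illustrative examples, not taken from the lecture.

```python
# A Knowledge Graph as a set of (subject, relation, object) triples.
# Nodes are entities; each triple is a labelled, directed edge.
kg = {
    ("Fløyen", "locatedIn", "Bergen"),
    ("Bergen", "locatedIn", "Norway"),
    ("Norway", "locatedIn", "Europe"),
}

def outgoing_edges(graph, entity):
    """Return the (relation, object) pairs for edges leaving `entity`."""
    return {(rel, obj) for (subj, rel, obj) in graph if subj == entity}

print(outgoing_edges(kg, "Bergen"))  # {('locatedIn', 'Norway')}
```

Richer data models (e.g. RDF or property graphs) extend this basic picture, but the triple view is enough to see how queries and analytics reduce to operations over a graph.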
Aidan Hogan is an Associate Professor at the Department of Computer Science, University of Chile, and an Associate Researcher at the Millennium Institute for Foundational Research on Data (IMFD). His research interests relate primarily to the Semantic Web, Databases and Information Extraction; he has published over one hundred peer-reviewed works on these topics. He has been an invited lecturer at seven summer schools and has co-organised three summer schools. He is an author or lead author of three books, the latest of which, entitled “Knowledge Graphs”, is due to be published with Morgan & Claypool; a manuscript of this book is available from arXiv (https://arxiv.org/abs/2003.02320). For further information, see his homepage (http://aidanhogan.com/).
Lecture 2 – Reasoning in Knowledge Graphs
Ricardo Guimarães, Ana Ozaki
Knowledge Graphs (KGs) are becoming increasingly popular in industry and academia. They can be represented as labelled graphs conveying structured knowledge in a domain of interest, where nodes and edges are enriched with metaknowledge such as temporal validity, provenance, and language. Once the data is structured as a labelled graph, one can apply reasoning techniques to extract relevant and insightful information. We provide an overview of deductive, inductive and abductive approaches for reasoning in KGs.
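To give a flavour of the deductive side, the sketch below forward-chains a single transitivity rule — locatedIn(x, y) ∧ locatedIn(y, z) → locatedIn(x, z) — over a toy labelled graph until no new facts can be derived. The rule and data are illustrative and not taken from the lecture.

```python
# Forward chaining a transitivity rule over a triple-based KG
# until a fixpoint is reached (no new facts derivable).
kg = {
    ("Fløyen", "locatedIn", "Bergen"),
    ("Bergen", "locatedIn", "Norway"),
    ("Norway", "locatedIn", "Europe"),
}

def forward_chain(graph, relation):
    """Close `relation` under transitivity by repeated rule application."""
    facts = set(graph)
    while True:
        derived = {
            (x, relation, z)
            for (x, r1, y1) in facts if r1 == relation
            for (y2, r2, z) in facts if r2 == relation and y1 == y2
        }
        new = derived - facts
        if not new:          # fixpoint: nothing left to derive
            return facts
        facts |= new

closure = forward_chain(kg, "locatedIn")
print(("Fløyen", "locatedIn", "Europe") in closure)  # True: derived in two steps
```

Inductive and abductive reasoning differ in kind — generalising patterns from the data, or hypothesising facts that would explain it — but deductive rule application of this form is the classical starting point.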
Ricardo Guimarães is a postdoctoral research fellow at the University of Bergen. He works in Artificial Intelligence (AI), specifically in Knowledge Representation and Reasoning (KR), focusing on Description Logics. Currently, he is working towards combining Knowledge Representation approaches with Machine Learning methods, focusing on Ontologies and Knowledge Graphs.
Ana Ozaki is an associate professor at the University of Bergen, Norway. She is an AI researcher in the field of knowledge representation and reasoning and in learning theory. Ozaki is interested in the formalisation of the learning phenomenon so that questions involving learnability, complexity, and reducibility can be systematically investigated and understood. Her research focuses on learning logical theories formulated in description logic and related formalisms for knowledge representation. She is the principal investigator of the project Learning Description Logic Ontologies funded by RCN.
Lecture 3 – Knowledge Graph Embeddings
Steven Schockaert, Víctor Gutiérrez-Basulto
Title: Reasoning about Concepts with Ontologies and Vector Space Embeddings
Ontologies and vector space embeddings are among the most popular frameworks for encoding conceptual knowledge. Ontologies excel at capturing the logical dependencies between concepts in a precise and clearly defined way. Vector space embeddings excel at modelling similarity and analogy. Given these complementary strengths, various research lines have focused on developing frameworks that combine the best of both worlds. In this lecture, we present an overview of the work in this area. We first discuss the theory of conceptual spaces, which was proposed in the 1990s by Gärdenfors as an intermediate representation layer between embeddings and symbolic knowledge bases. Second, we discuss approaches where symbolic knowledge is modelled in terms of geometric constraints, which are used to constrain or regularise vector space embeddings. Finally, we discuss methods in which similarity, and other forms of conceptual relatedness, are derived from vector space embeddings and subsequently used to support flexible forms of reasoning with ontologies.
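The third line of work mentioned above starts from similarity derived from embeddings. A minimal sketch, using made-up three-dimensional "embeddings" purely for illustration:

```python
import math

# Toy concept embeddings (invented for illustration; real embeddings
# are learned from data and have hundreds of dimensions).
embeddings = {
    "dog":  [0.9, 0.8, 0.1],
    "wolf": [0.8, 0.9, 0.2],
    "car":  [0.1, 0.2, 0.9],
}

def cosine(u, v):
    """Cosine similarity: the standard relatedness measure for embeddings."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def most_similar(concept):
    """The nearest other concept in the embedding space."""
    others = (c for c in embeddings if c != concept)
    return max(others, key=lambda c: cosine(embeddings[concept], embeddings[c]))

print(most_similar("dog"))  # wolf
```

A similarity score of this kind can then feed into flexible ontology reasoning, e.g. plausibly extending a concept to instances that are close to its known members in the embedding space.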
Lecture 4 – Rule Mining and Link Prediction
Fabian M. Suchanek
Lecture 5 – Learning and Reasoning with Graph Data: Neural and
Manfred Jaeger
Manfred Jaeger studied mathematics in Freiburg, Germany, where he obtained his diploma in mathematics in 1991. He subsequently went to the Max-Planck-Institute for Computer Science in Saarbrücken, and obtained a PhD in Computer Science from Saarland University in 1995. From 1996 to 2003 he continued as a research associate at the Max-Planck-Institute for Computer Science, and in that period also spent time as a postdoctoral researcher at Stanford University, the University of Helsinki, and Freiburg University. In 2002 he obtained the Habilitation in computer science from Saarland University. Since 2003 he has been an associate professor at the Department of Computer Science at Aalborg University, Denmark.
Manfred Jaeger has served as associate editor for the Journal of Artificial Intelligence Research and the Artificial Intelligence Journal. He is currently a member of the editorial board of Machine Learning.
Lecture 6 – Automating Moral Reasoning