Knowledge Representation and Reasoning (KRR)

The fourth module of my MSc AI programme at the University of Essex Online



Module Overview

As per the course website, "During this module, we will cover the concepts and principles of KRR. We will review methods and frameworks as well as tools to enable you to engage with, experience and envision current as well as future developments in the area KRR as sub-discipline of AI. You will not only be equipped with theoretical and practical skills, but this module will also make you aware of the practical applications of KRR as a key part of AI solutions."

Topic Overview

Knowledge Representation and Reasoning (KRR) is a field concerned with how information about the world can be formally structured and logically manipulated. It explores methods for encoding facts, concepts, relationships, and rules in a way that allows for systematic inference, decision-making, and problem-solving. KRR draws from disciplines like logic, linguistics, and philosophy to build frameworks that support understanding, explanation, and prediction across diverse domains, from mathematics to ethics.

In artificial intelligence, KRR serves as the backbone for systems that require symbolic understanding and logical reasoning. It enables machines to represent complex knowledge, such as ontologies, causal relationships, and temporal events, and to draw conclusions through deductive, inductive, or abductive reasoning. This is especially vital in applications like expert systems, semantic web technologies, and explainable AI, where transparency and interpretability are key. Modern AI increasingly blends KRR with statistical learning, using hybrid models to combine the precision of logic with the adaptability of data-driven approaches.
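To make one of the reasoning modes above concrete, here is a minimal sketch of deductive reasoning as forward chaining over propositional Horn rules. This is my own toy illustration, not anything from the module materials; the facts and rule names (Tweety the bird) are invented placeholders.

```python
def forward_chain(facts, rules):
    """Derive every fact entailed by Horn rules of the form (premises, conclusion)."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # A rule fires when all its premises are already known.
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True
    return known

facts = {"bird(tweety)"}
rules = [
    ({"bird(tweety)"}, "has_wings(tweety)"),
    ({"has_wings(tweety)"}, "can_fly(tweety)"),
]
print(forward_chain(facts, rules))  # the derived set includes "can_fly(tweety)"
```

The loop simply applies modus ponens until no new fact can be derived, which is the essence of deduction in rule-based expert systems.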

My End-of-Module Reflection

Before the module

The Knowledge Representation and Reasoning (KRR) module was the one I was most anticipating at the beginning of my master's programme. Among all the modules, this was the one I clearly knew nothing about, which made me curious. Furthermore, a simple online search at the time showed me that KRR is a multidisciplinary domain: it draws from philosophy and logic, and incorporates ideas from psychology to extend theories of how the human mind gains knowledge to machines (Schank and Abelson, 1977). So, driven by my curiosity and my affinity for the fusion of domains, I was looking forward to 12 weeks of mind-opening knowledge that would connect my different interests and show me things about artificial intelligence that I did not even know existed.

Some of these expectations were met, and others were not. On the unmet side, I did not find much of a multidisciplinary character in the discussions, readings, seminars and lecturecasts, nor in the additional resources I went through over the three months of study. Beyond brief remarks relating parts of the domain to philosophy and logic, I did not come across anything linking it to the psychology of human learning, nor anything connecting other humanities fields to the technical arena. On the other hand, everything in the module was new to me: I previously knew nothing about logic formalisation techniques, ontologies, knowledge engineering, knowledge management and more.

During the module

Over the course of the module, week after week, I was introduced to new concepts, subjects and practices, and it was up to me to grasp them and understand where they fit in my own existing 'knowledge model': what all of this has to do with my work in business intelligence, with my interests in data science, machine learning and deep learning, and, of course, with artificial intelligence (AI). The steady stream of new topics kept delaying that holistic understanding until the very end. The topics ranged from the aspirations of the semantic web and how it can be realised through semantic layers, to the Resource Description Framework (RDF), ontologies and their various languages such as the Web Ontology Language (OWL), all the way through logic, propositional logic (PL), first-order logic (FOL), logic programming and Protégé, the open-source ontology editor.
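A flavour of how these pieces fit together can be given with a hand-rolled toy: RDF-style triples stored as plain tuples, plus one ontology-like inference, the transitivity of a subClassOf relation and the inheritance of types along it. This is my own sketch under simplifying assumptions (bare strings instead of URIs, a made-up Dog/Mammal/Animal hierarchy), not how an actual triple store or OWL reasoner is implemented.

```python
# Toy RDF-style graph: a set of (subject, predicate, object) triples.
triples = {
    ("Dog", "subClassOf", "Mammal"),
    ("Mammal", "subClassOf", "Animal"),
    ("rex", "type", "Dog"),
}

def infer(triples):
    """Close the graph under subClassOf transitivity and type inheritance."""
    g = set(triples)
    changed = True
    while changed:
        changed = False
        for s, p, o in list(g):
            for s2, p2, o2 in list(g):
                if p == "subClassOf" and p2 == "subClassOf" and o == s2:
                    new = (s, "subClassOf", o2)   # A ⊑ B, B ⊑ C  ⇒  A ⊑ C
                elif p == "type" and p2 == "subClassOf" and o == s2:
                    new = (s, "type", o2)          # x : A, A ⊑ B  ⇒  x : B
                else:
                    continue
                if new not in g:
                    g.add(new)
                    changed = True
    return g

closed = infer(triples)
print(("rex", "type", "Animal") in closed)  # True
```

Real systems such as Protégé with an OWL reasoner perform far richer inference, but the principle is the same: explicit triples plus axioms yield implicit knowledge.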

My consistent feeling throughout the module was that I was not reaching a level of understanding that satisfied me in any of these topics, so I looked, as much as I could, for additional resources, some of which I documented in my weekly reflections. What concerned me more was connecting all that I was learning with my work domain, business intelligence. That pursuit crystallised into two main questions that I developed during the module and that lingered in my mind until the very last week:

  1. How do ontologies – and knowledge representation in general – relate to relational and non-relational databases and data models?
  2. Do ontologies – and knowledge representation in general – still have a place in the age of large language models (LLMs) and AI agents?

The trail of my first question begins in my reflection on Unit 3 (mainly covering FOL). The second begins in Unit 9, when I had to start a discussion post answering the prompt, "Which language do you believe is the most useful to express ontologies that can be utilised by software agents on the WWW: KIF, OWL2, RDF or OWL-lite?" The second question was the more alarming one, as I was afraid I was spending three months of my life studying something that had become outdated, which, to my relief, I later discovered was not the case. I was able to reach answers to both questions at the very end of the module.

For my first question, the answer I reached is concisely summarised in one paragraph of the closing chapter of the module's textbook, the Handbook of Knowledge Representation, which was on the reading list for Unit 12: a data model assumes a closed world among its entities, while an ontology aims to lay the foundation for a shared understanding of concepts, their relations and axioms among various people and systems (Schreiber, 2008). As such, a data model can be elaborated into an ontology if needed.
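The closed-world versus open-world distinction behind Schreiber's point can be illustrated in a few lines. This is my own minimal sketch (the alice/bob facts are invented): a database that cannot find a record answers "no", whereas an ontology that cannot prove a statement answers only "unknown".

```python
facts = {("alice", "manages", "bob")}

def holds_cwa(fact, facts):
    # Closed-world assumption (databases): anything not recorded is false.
    return fact in facts

def holds_owa(fact, facts, negations=frozenset()):
    # Open-world assumption (ontologies): true, explicitly false, or unknown.
    if fact in facts:
        return True
    if fact in negations:
        return False
    return None  # unknown, not false

q = ("alice", "manages", "carol")
print(holds_cwa(q, facts))  # False: the database answers "no"
print(holds_owa(q, facts))  # None:  the ontology answers "unknown"
```

That three-valued answer is precisely what lets an ontology serve as a shared model across systems that each hold only part of the picture.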

Linking to my second question, I came across an article, also in the very last week, that surveys the state of semantic layers, ontologies and knowledge graphs in 2026 in the context of LLMs and AI agents. In addition to providing a historical overview, which I needed, of how semantic layers helped unify the definition of measures in business intelligence, it argued for the importance of ontologies as a rigorous reality guardrail for LLMs and AI agents (Talisman, 2026). That last argument echoed a perspective I had already encountered while researching my initial post for the Unit 9 collaborative discussion, in which I read, and therefore argued, that OWL2 and JSON-LD will be needed to keep LLMs and AI agents useful (Magana and Monti, 2025).

Conclusion

The module did not meet all my expectations, and I am still struggling to connect everything I learned over the three months with my work. However, my effort to establish that connection, together with the required readings and additional resources, helped me formulate two questions, and I take some satisfaction in having reached answers to them. I know I am not yet ready to introduce ontologies to my workplace, but KRR is a domain that now exists in my mind. I follow its related discussions and will keep it in view moving forward, looking for opportunities to apply what I learned to real-world problems and live questions that I face, especially the consistently challenging task of constructing shared realities among groups, whether work teams, developers and users, or people from different cultural backgrounds.


Reference List

Magana, I. and Monti, M. (2025) Enhancing Large Language Models through Neuro-Symbolic Integration and Ontological Reasoning, arXiv.org. Available at: https://arxiv.org/abs/2504.07640v1 (Accessed: 23 December 2025).

Schank, R.C. and Abelson, R.P. (1977) Scripts, Plans, Goals and Understanding: An Inquiry into Human Knowledge Structures. John Wiley.

Schreiber, G. (2008) 'Knowledge Engineering', in F. van Harmelen, V. Lifschitz and B. Porter (eds) Handbook of Knowledge Representation. United Kingdom: Elsevier B.V., pp. 929-946.

Talisman, J. (2026) 'Ontologies, Context Graphs, and Semantic Layers: What AI Actually Needs in 2026', Metadata Weekly, 22 January. Available at: https://metadataweekly.substack.com/p/ontologies-context-graphs-and-semantic (Accessed: 23 January 2026).