Intelligent Agents - Unit 3: Agent Architectures

Overview

As per the course website, "This week’s lecturecast focuses on the different approaches available when deciding on an agent architecture. There is a discussion of the history of agents and their development, and how this has led to a number of key architectures. We will explore symbolic reasoning agents, agents with state, reactive architectures and more to understand the value of relative merits of the different approaches."

My Reflection

Overall Reflection

This unit was focused on agent architectures, which are the components that allow an agent to operate - in other words, the components through which the agent operates. These components can be either software or hardware.

The unit included a lecturecast, a list of readings and the requirement to conclude the collaborative discussion with a summary post. The lecturecast explained what an agent architecture is and touched on three schools of thought on types of agents and their views of agent architecture: symbolic reasoning, reactive agents and hybrid agents. The school of symbolic reasoning bases agent architectures on explicit logical reasoning, while the school of reactive agents argues against explicit logical reasoning as a way of developing intelligent autonomous agents, focusing instead on developing the ability to react to the agent's environment. The third, more recent, school of thought combines the previous two. The reading list comprised five legacy papers from these three schools of thought, dating from the eighties and nineties.
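To make the contrast concrete for myself, I sketched the difference between the first two schools in code. This is my own illustrative sketch, not taken from the lecturecast or readings; the percept keys and rules are invented for illustration.

```python
def symbolic_agent(percept: dict, beliefs: dict) -> str:
    """Symbolic school: maintain an explicit world model (beliefs)
    and select an action by reasoning over that representation."""
    beliefs.update(percept)              # update the knowledge representation
    if beliefs.get("obstacle_ahead"):    # explicit logical rule over the model
        return "turn_left"
    return "move_forward"


def reactive_agent(percept: dict) -> str:
    """Reactive school: no internal model; behaviour is a direct
    mapping from the current percept to an action."""
    return "turn_left" if percept.get("obstacle_ahead") else "move_forward"


beliefs = {}
print(symbolic_agent({"obstacle_ahead": True}, beliefs))   # turn_left
print(reactive_agent({"obstacle_ahead": False}))           # move_forward
```

The hybrid school would, roughly, layer the two: a reactive layer for fast responses to the environment, with a symbolic layer deliberating over longer-term goals.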

For the group project to be delivered in Unit 6, I was able to connect with two of my four colleagues, and we have yet to schedule our first meeting. My collaborative discussion summary post is included below in the artefacts.

Readings Reflection

The most interesting reading for me was Rodney Brooks's 1991 paper 'Intelligence Without Representation'. In this paper, Brooks argues against the traditional approach to AI, which relies on explicit knowledge representation and reasoning. He proposes a new approach based on building intelligent agents - or "Creatures", as he calls them - that can learn from their environment and adapt to new situations, representing the reactive agents school of thought mentioned in the lecturecast.

I found Brooks especially interesting, starting from his quotes mentioned in the lecturecast and continuing with his paper on the reading list. Even the writing style and tone of the paper were different. I intend to learn more about his thinking, and have started by following him on LinkedIn.

Below are some quotes that I especially liked or found thought-provoking:


Reference

Brooks, R.A. (1991) 'Intelligence Without Representation', Artificial Intelligence, 47(1-3), pp. 139-159.

Artefacts

Collaborative Learning Discussion 1

The collaborative discussion we started in Unit 1 is about what has led to the rise of agent-based systems and what benefits organisations are gaining from them. This week, it was time to write a summary post covering our initial posts and the peer responses we received. I received only one peer response, from Fatema. Below, I summarise what I argued for and what I learnt from the discussion.

Summary Post

In my initial post, I aimed to unfold the multitude of factors that have led to the rise of agent-based systems (ABS): internal technological factors, such as the successive developments in natural language processing (NLP) and large language models (LLMs) alongside the overall growth in computational capabilities (Priyadarshi, 2025), and market-specific and broader economic factors related to adaptability in highly dynamic and distributed environments (Joshi, 2025), especially when compared with traditional software architectures. I also argued that, despite all the sparkling promises ABS present to businesses, their success hinges on their effectiveness in increasing revenues and reducing costs.

In her response, Fatema agreed with my overall approach, adding further points on the limitations and dangers of adopting ABS. She pointed out the importance of agent monitoring and governance, with predefined operational boundaries and emergency shutdown mechanisms that allow human intervention when abnormal behaviour is detected. Furthermore, Fatema stressed the importance of the human-in-the-loop and human-on-the-loop principles in preventing over-reliance on automation and balancing efficiency with accountability. Returning to the underlying fundamentals of any data-driven system, Fatema ended her contribution by mentioning the criticality of data quality, as well as how pivotal the testing phases of agentic systems can be.

I agree with what Fatema added to the discussion. I would further extend her contribution by mentioning the limitations and dangers of unpredictability in ABS, especially when scaled to multi-agent systems (MAS), which another colleague illuminated in her initial post. In response to these concerns, I pointed to an accumulating body of research showcasing the benefits of adopting logic formalisation and ontologies for 'safer' and more reliable MAS. For example, Zaki et al. (2021) propose a modelling paradigm for the self-certification of autonomous systems, based on ontologies enriched with "expressive semantic relationships and modality constraints to relate finite state automatons together". Felicíssimo et al. (2005) make a similar proposal that uses ontologies to define regulations over roles in open MAS, built on five main concepts: Role, Norm, Penalty, Action and Place. They claim that their proposed ontology's structure "provides a semantic support for agents to base their behaviour according to norms and to reason about action selection."
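To help myself think through how such a normative ontology might constrain agent behaviour, I sketched the five concepts from Felicíssimo et al. (2005) as simple classes. Only the five concept names come from the paper; the attributes, the example norm and the `violated_norms` helper are my own assumptions for illustration, not the authors' actual ontology structure.

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str

@dataclass
class Place:
    name: str

@dataclass
class Penalty:
    description: str

@dataclass
class Norm:
    # A norm forbids a given action at a given place, on pain of a penalty.
    forbidden_action: Action
    place: Place
    penalty: Penalty

@dataclass
class Role:
    name: str
    norms: list = field(default_factory=list)

def violated_norms(role: Role, action: Action, place: Place) -> list:
    """Return the norms an agent playing `role` would violate by
    performing `action` at `place` - a crude stand-in for the
    norm-based reasoning the paper describes."""
    return [n for n in role.norms
            if n.forbidden_action.name == action.name
            and n.place.name == place.name]

# Invented example: a 'buyer' role forbidden from reselling at the auction house.
buyer = Role("buyer", norms=[
    Norm(Action("resell"), Place("auction_house"), Penalty("fine")),
])
print(len(violated_norms(buyer, Action("resell"), Place("auction_house"))))  # 1
print(len(violated_norms(buyer, Action("bid"), Place("auction_house"))))     # 0
```

Even this toy version illustrates the appeal of the approach: because norms are explicit data rather than behaviour buried in code, agents (and their overseers) can inspect and reason about which actions are permitted before acting.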

Overall, I still maintain that, despite all of the increasingly shiny promises ABS hold for businesses, their success is yet to be widely proven by financial gains for for-profit institutions and - to add after this discussion - by institutions' ability to foresee and mitigate ABS limitations and dangers, especially those dangers that are themselves the source of some profits. The promise of reducing labour costs, for instance, is where the danger of over-reliance on automation - which Fatema mentioned - resides.


Reference List

Felicíssimo, C. et al. (2005) ‘Normative Ontologies to Define Regulations over Roles in Open Multi-Agent Systems’, in AAAI 2005 Fall Symposium on Roles, an Interdisciplinary Perspective. Menlo Park, California, USA: AAAI Press. Available at: https://cdn.aaai.org/Symposia/Fall/2005/FS-05-08/FS05-08-011.pdf (Accessed: 6 February 2026).

Joshi, S. (2025) ‘Review of Autonomous and Collaborative Agentic AI and Multi-Agent Systems for Enterprise Applications’, International Journal of Innovative Research in Engineering and Management (IJIREM), 12(3), pp. 65–76. Available at: https://doi.org/10.55524/ijirem.2025.12.3.9.

Priyadarshi, M. (2025) ‘Autonomous AI Agents Transforming Enterprise Operations: From Static Automation to Intelligent Decision-Making Systems’, Sarcouncil Journal of Multidisciplinary, 5(7), pp. 863–870. Available at: https://doi.org/10.5281/zenodo.16408261.

Zaki, O. et al. (2021) ‘Reliability and Safety of Autonomous Systems Based on Semantic Modelling for Self-Certification’, Robotics, 10(1), p. 10. Available at: https://doi.org/10.3390/robotics10010010.