My Reflection
Overall Reflection
The first week started with an introduction to the concept of agents: watching a seminar by the module's tutor,
getting through a lecturecast, and reading Chapter 2 of one of the module's textbooks, An Introduction to
MultiAgent Systems by Wooldridge (2009). Although I was initially worried that the textbook would be
outdated, given the continuous development in the field of AI agents, the chapter turned out to be a great read,
offering a concise and open-minded introduction to the topic.
In addition, the module included the start of a collaborative discussion on the needs and benefits of
agent-based systems, for which I wrote an initial post that I also document in the artefacts section
below.
Finally, as pointed out by the module tutor, we were assigned to groups that are expected to self-organise and
prepare for a group submission in Unit 6. So I first emailed the tutor to get a clear list of my colleagues'
email addresses, and then emailed them, inviting them to connect over Teams.
Readings Reflection
Chapter 2 of Wooldridge (2009) covers wide ground, tackling different topics to discuss the concept of
agents. It started from the difficulty of defining what an agent is, providing a general working definition of
the term as "a computer system that is situated in some environment, and that is capable of autonomous action
in this environment in order to meet its delegated objectives."
It clarified that the definition applies to systems that we would not normally consider agents, including
control systems, such as a thermostat, and software daemons, like the one detecting the presence of unread
emails. Still, we would not consider those 'intelligent' agents. Wooldridge argued that intelligence would
require the agents to have the traits of reactivity, proactivity and social ability. The chapter, therefore,
proceeded to cover the meaning and intricate challenges of understanding each of those traits, as well as the
notion of autonomy, which is a fundamental characteristic of any system considered an 'agent'.
Reflecting on these traits, the discussion got me thinking about how 'non-intelligent agents', like a
thermostat, can be reactive and proactive yet have no 'social ability', which made me realise how
significant, and complex, this trait is.
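To make this contrast concrete for myself, I sketched a thermostat as code. This is my own hypothetical illustration, not taken from the textbook: the Thermostat class and its condition-action rule are assumptions for the sake of the example.

```python
# A minimal sketch of a purely reactive 'agent': a thermostat.
# It perceives its environment (the current temperature) and reacts
# with an action, but it has no social ability: it never communicates
# or negotiates with other agents.

class Thermostat:
    def __init__(self, setpoint: float):
        self.setpoint = setpoint  # desired temperature

    def act(self, perceived_temp: float) -> str:
        # A single condition-action rule is the agent's entire
        # 'decision making'; there is no reasoning about other agents.
        if perceived_temp < self.setpoint:
            return "heating on"
        return "heating off"

agent = Thermostat(setpoint=20.0)
print(agent.act(18.5))  # heating on
print(agent.act(21.0))  # heating off
```

Writing it out this way made it obvious to me where the 'social ability' trait would have to enter: the agent would need some way to exchange messages with, and reason about, other agents, which this rule-based loop entirely lacks.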
The chapter also touched upon the classification of environment properties, citing Russell and Norvig (1995).
Additionally, Wooldridge covered the debate around understanding intelligent agents in terms of the 'intentional
stance', describing them with attributes that are usually given to humans, like 'understand', 'know', 'want',
'decide', 'believe', etc., and cleverly pointed out that these attributes are usually useful when we do not
completely understand or know the intricacies of something, which is similar to the human tendency towards
animism. The author also discussed the relation between the intentional stance and the physical and design
stances. This was the most interesting bit of the reading for me, for its philosophical and linguistic
intricacies, and I intend to learn more about this debate when I get the chance.
Overall, below are some quotes that I especially like from the reading:
-
"The first thing to say is that autonomy is a spectrum. At one extreme on this spectrum are fully realized
human beings, like you and me. We have as much freedom as anybody does with respect to our beliefs, our goals,
and our actions. Of course, we do not have complete freedom over beliefs, goals, and actions."
-
"Agent architectures, of which we shall see many examples later in this book, are really software
architectures for decision-making systems that are embedded in an environment."
-
"[P]erhaps the most important distinction between agents and expert systems is that expert systems like MYCIN
are inherently disembodied."
-
"The attitudes employed in such folk psychological descriptions are called the intentional notions. The
philosopher Daniel Dennett has coined the term intentional system to describe entities ‘whose behaviour can be
predicted by the method of attributing belief, desires and rational acumen.’"
-
"What objects can be described by the intentional stance? As it turns out, almost any automaton can."
-
"Put crudely, the more we know about a system, the less we need to rely on animistic, intentional explanations
of its behaviour – Shoham observes that the move from an intentional stance to a technical description of
behaviour correlates well with many models of child development, and with the scientific development of
humankind generally..."
-
"[W]hen faced with completely unknown phenomena, it is not only children who adopt animistic explanations. It
is often easier to teach some computer concepts by using explanations such as ‘the computer does not know . . .’,
than to try to teach abstract principles first."
Reference
Russell, S.J. and Norvig, P. (1995) Artificial Intelligence: A Modern Approach. Englewood Cliffs, NJ: Prentice Hall.
Wooldridge, M. (2009) ‘Intelligent Agents’, in An Introduction to MultiAgent Systems. 2nd edn.
West Sussex, United Kingdom: John Wiley & Sons, pp. 21–48.
Artefacts
Collaborative Learning Discussion 1
Initiating the first collaborative discussion for this module, we were asked to write an initial post on what
has led to the rise of agent-based systems and what benefits organisations are realising from such an approach.
I did some additional reading and wrote the following:
Initial post:
Like any question about the rise of a technology, answering what has led to the rise of agent-based systems
(ABS) requires considering a multitude of factors, including – as one way of categorisation – internal
technological, market-specific and overall economic aspects, in addition to areas where they overlap.
The rise of ABS can be seen as a ‘natural’ development in AI technologies, from logic-based systems to expert
systems, and then to systems that benefit from context protocols, natural language processing (NLP) and large
language models (LLMs), in addition to the broad underlying factor of growing computational capabilities
(Priyadarshi, 2025).
From market-specific and overall economic perspectives, ABS offer organisations big promises, some of which
have already been witnessed (Joshi, 2025). These promises can be better understood by juxtaposing ABS
architectures with traditional centralised software architectures, i.e. any software that is not agentic.
Traditional software architectures struggle in highly dynamic and distributed environments as they cannot
sense changing circumstances and therefore cannot adapt their behaviour. This makes such architectures too
rigid to help in real-life scenarios and reliant on much human intervention, which hinders their benefit in
the first place (Priyadarshi, 2025). Centralised systems assume the world is stable, predictable and fully
knowable, which is arguably never the case in the business world, and increasingly so.
For example, financial markets are continuously changing and are influenced by so many economic, political and
social factors that they are often volatile environments. Supply chains, too, are well known to be influenced
by geopolitical changes that in many cases lead to chain disruptions. ABS adapt well to such environments
because their premise assumes that the world is dynamic, uncertain and distributed. These assumptions are
reflected in the essential traits of ABS: being autonomous, reactive, proactive and socially able, as asserted
by Wooldridge (2009). Autonomous agent systems can make decisions far quicker than rigid software architectures
that provide only insights and stop at the point of waiting for human interpretation and decision making
(Joshi, 2025). Additionally, agent-based systems do not need to be re-engineered as frequently as traditional
ones, and efforts can be channelled into scaling and enhancing those agents to cover more tasks.
These traits promise organisations the ability to handle high-frequency, high-volume environments, like
markets, logistics and virtually every other sector. Agents can also sense-decide-act in real time, reducing
latency caused by human-centralised bottlenecks. Superseding human-centred systems, ABS provide stretched
capabilities of monitoring, analysis and execution, handling complexity that exceeds human cognitive limits
and enabling organisations to model what-if scenarios that cannot be computed manually and to make decisions
accordingly (Priyadarshi, 2025).
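To illustrate the sense-decide-act loop mentioned above, here is a minimal hypothetical sketch. The function names, the random signal and the threshold rule are my own assumptions for the sake of the example, not taken from the cited papers:

```python
import random

# Hypothetical sketch of an agent's sense-decide-act loop.

def sense() -> float:
    # Stand-in for reading a live market or logistics signal.
    return random.uniform(0.0, 1.0)

def decide(signal: float, threshold: float = 0.7) -> str:
    # The agent decides autonomously, with no human in the loop.
    return "act" if signal > threshold else "wait"

def act(decision: str) -> None:
    if decision == "act":
        print("executing response")

# In a real deployment this loop would run continuously,
# which is how the agent avoids human-centralised latency.
for _ in range(3):
    act(decide(sense()))
```

The point of the sketch is that no step waits for human interpretation: perception feeds straight into an autonomous decision and, where warranted, an action.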
Needless to say, these benefits appeal to organisations, not just to survive, but to generate more income and
reduce costs, which is the main driver, at least for for-profit organisations in capital markets. These
benefits entail promises of rapid adaptation, eliminating the costs of tricky and increasingly complex skills
hiring and development, and reducing labour costs (Joshi, 2025). This last point drives me to argue that
although the advent of ABS can certainly be attributed to a multitude of factors, their success, especially in
the business world, will hinge on whether they prove to bring financial benefit over traditional business
workflows.
Reference List
Joshi, S. (2025) ‘Review of Autonomous and Collaborative Agentic AI and Multi-Agent Systems for Enterprise
Applications’, International Journal of Innovative Research in Engineering and Management (IJIREM),
12(3), pp. 65–76. Available at: https://doi.org/10.55524/ijirem.2025.12.3.9.
Priyadarshi, M. (2025) ‘Autonomous AI Agents Transforming Enterprise Operations: From Static Automation to
Intelligent Decision-Making Systems’, Sarcouncil Journal of Multidisciplinary, 5(7), pp. 863–870.
Available at: https://doi.org/10.5281/zenodo.16408261.
Wooldridge, M. (2009) ‘Intelligent Agents’, in An Introduction to MultiAgent Systems. 2nd edn.
West Sussex, United Kingdom: John Wiley & Sons, pp. 21–48.