Artefacts
Collaborative Learning Discussion 1
The collaborative discussion we started in Unit 1 concerns what has led to the rise of agent-based systems and what benefits organisations are gaining from them. This week, it was time to write a summary post, drawing together our initial posts and the peer responses we received. I received only one peer response, from a peer called Fatema. Below, I summarise what I argued and what I learnt from the discussion.
Summary Post
In my initial post, I aimed to unfold the multitude of factors that have led to the rise of agent-based systems (ABS): internal technological factors, such as the steady development of natural language processing (NLP) and large language models (LLMs) alongside the overall growth of computational capabilities (Priyadarshi, 2025), and market-specific and broader economic factors, related to the need for adaptability in highly dynamic and distributed environments (Joshi, 2025), especially when compared to traditional software architectures. I also argued that, despite all the sparkling promises that ABS present to businesses, their success hinges on their effectiveness in increasing revenues and reducing costs.
In her response, Fatema agreed with my overall approach and added further points on the limitations and dangers of adopting ABS. She pointed out the importance of agent monitoring and governance, with predefined operational boundaries and emergency shutdown mechanisms that allow human intervention when abnormal behaviour is detected. Furthermore, Fatema stressed the importance of human-in-the-loop and human-on-the-loop principles to prevent over-reliance on automation and to balance efficiency with accountability. Returning to the fundamentals underlying any data-driven system, Fatema ended her contribution by noting the criticality of data quality and how pivotal the testing phases of agentic systems can be.
I agree with what Fatema added to the discussion. I would further extend her contribution by mentioning the limitations and dangers of unpredictability in ABS, especially when scaled to multi-agent systems (MAS), which another colleague illuminated in her initial post. In response to these concerns, I pointed to an accumulating body of research showcasing the benefits of adopting logic formalisation and ontologies for 'safer' and more reliable MAS. For example, Zaki et al. (2021) propose a modelling paradigm for self-certification of autonomous systems, based on ontologies enriched with "expressive semantic relationships and modality constraints to relate finite state automatons together". Felicíssimo et al. (2005) make a similar proposal that uses ontologies to define regulations over roles in open MAS, built on five main concepts: Role, Norm, Penalty, Action and Place. They claim that their proposed ontology's structure "provides a semantic support for agents to base their behaviour according to norms and to reason about action selection."
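To make the five-concept structure concrete, a normative ontology in the spirit of Felicíssimo et al. (2005) could be sketched roughly as below. Only the five concept names (Role, Norm, Penalty, Action, Place) come from the source; the field names, the compliance check, and the marketplace example are my own illustrative assumptions, not the authors' formalisation.

```python
from dataclasses import dataclass

# Sketch of a normative ontology: Role, Norm, Penalty, Action, Place.
# Everything beyond these five concept names is an illustrative assumption.

@dataclass(frozen=True)
class Role:
    name: str

@dataclass(frozen=True)
class Action:
    name: str

@dataclass(frozen=True)
class Place:
    name: str

@dataclass(frozen=True)
class Penalty:
    description: str

@dataclass(frozen=True)
class Norm:
    # A norm forbids an action for a given role at a given place,
    # attaching a penalty to violations (a simplifying assumption).
    role: Role
    forbidden_action: Action
    place: Place
    penalty: Penalty

def violated_norms(role: Role, action: Action, place: Place,
                   norms: list[Norm]) -> list[Norm]:
    """Return the norms an agent in `role` would violate by doing `action` at `place`."""
    return [n for n in norms
            if n.role == role and n.forbidden_action == action and n.place == place]

# Hypothetical example: a buyer agent may not cancel a confirmed
# order inside the marketplace, on pain of reputation loss.
buyer = Role("buyer")
cancel = Action("cancel_confirmed_order")
market = Place("marketplace")
norms = [Norm(buyer, cancel, market, Penalty("reputation loss"))]

print(len(violated_norms(buyer, cancel, market, norms)))            # 1
print(len(violated_norms(buyer, Action("browse"), market, norms)))  # 0
```

An agent could consult `violated_norms` before acting, which is one simple reading of how such an ontology gives agents "a semantic support" to reason about action selection.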
Overall, I still hold that, despite all of the increasingly shiny promises that ABS hold for businesses, their success is yet to be widely proven by financial gain for for-profit institutions and, to add after this discussion, by institutions' ability to foresee and mitigate ABS limitations and dangers, especially those dangers that are themselves the source of some profits: the promise of reducing labour costs, for instance, carries with it the danger of over-reliance on automation that Fatema mentioned.
Reference List
Felicíssimo, C. et al. (2005) ‘Normative Ontologies to Define Regulations over Roles in Open Multi-Agent Systems’, in AAAI 2005 Fall Symposium on Roles, an Interdisciplinary Perspective (AAAI Fall Symposium Series). Menlo Park, California, USA: AAAI Press. Available at: https://cdn.aaai.org/Symposia/Fall/2005/FS-05-08/FS05-08-011.pdf (Accessed: 6 February 2026).
Joshi, S. (2025) ‘Review of Autonomous and Collaborative Agentic AI and Multi-Agent Systems for Enterprise Applications’, International Journal of Innovative Research in Engineering and Management (IJIREM), 12(3), pp. 65–76. Available at: https://doi.org/10.55524/ijirem.2025.12.3.9.
Priyadarshi, M. (2025) ‘Autonomous AI Agents Transforming Enterprise Operations: From Static Automation to Intelligent Decision-Making Systems’, Sarcouncil Journal of Multidisciplinary, 5(7), pp. 863–870. Available at: https://doi.org/10.5281/zenodo.16408261.
Zaki, O. et al. (2021) ‘Reliability and Safety of Autonomous Systems Based on Semantic Modelling for Self-Certification’, Robotics, 10(1), p. 10. Available at: https://doi.org/10.3390/robotics10010010.