My Reflection
Overall Reflection
This week focused mainly on submitting the group assignment, but it also included a seminar, a reading item
and a formative activity, in addition to peer responses for the second collaborative discussion.
For the group assignment, we worked on a design proposal document for an agentic system built to perform
online academic search. In previous weeks we had already distributed the tasks, and mine was to develop
the third section of the document, covering the agent model and design decisions. That led me to read about the
Belief-Desire-Intention (BDI) model (Bratman, 1987), as it was the chosen model for our agent architecture. It was
interesting to see a model first introduced in 1987, coming mainly from a philosophical background, still being
widely adopted in autonomous agents, as far as I could see from the literature that I found.
The seminar was a short session on what is expected in the assignment. The reading item was the Finin et al.
(1994) paper on KQML as an Agent Communication Language, which I had already read the previous week
for my initial post in the collaborative discussion.
Regarding the formative activity, it was to create an agent dialogue using KQML and KIF. The collaborative
discussion, which we started in the previous week, was about the use of KQML as an
agent communication language versus conventional programming techniques such as method invocation. In the
artefacts section below, I post my work for both items, along with more details.
Reference List
Bratman, M.E. (1987) Intention, Plans, and Practical Reason. Harvard University Press.
Finin, T. et al. (1994) ‘KQML as an Agent Communication Language’, in Proceedings of the Third International
Conference on Information and Knowledge Management. Conference on Information and Knowledge Management,
New York, NY, USA: Association for Computing Machinery, pp. 456–463. Available at:
https://doi.org/10.1145/191246.191322.
Artefacts
Formative Activity
The activity requirement was as follows: "Create an agent dialogue, using KQML and KIF, between two agents
(named Alice and Bob). Alice is an agent designed to procure stock and Bob is an agent that controls the stock
levels for a warehouse. This dialogue should see Alice asking Bob about the available stock of 50 inch
televisions, and also querying the number of HDMI slots the televisions have."
Below is my answer to the activity:
Message 1 - Alice asks about stock levels:
Here, Alice uses the "ask-one" performative to ensure she receives only one answer (as opposed to
"ask-all"), together with the "val" operator, which binds the "?qty" variable representing the quantity of stock
available. The message is labelled with ":reply-with msg1" so that Bob's reply can reference it.
(ask-one
:sender Alice
:receiver Bob
:reply-with msg1
:language KIF
:ontology warehouse-stock
:content (val
(stock-quantity
(and
(item-type Television) (item-size 50)
)
)
?qty
)
)
Message 2 - Bob replies with stock levels:
Here, Bob uses the "tell" performative, sending the actual quantity value in place of the "?qty" variable.
The message also includes ":in-reply-to msg1" to link the reply to the original query.
(tell
:sender Bob
:receiver Alice
:language KIF
:ontology warehouse-stock
:in-reply-to msg1
:content (val
(stock-quantity
(and
(item-type Television) (item-size 50)
)
)
12
)
)
Message 3 - Alice asks about HDMI slots number:
This message echoes Message 1, but this time asks about the HDMI slot count for the same type of televisions
instead of their stock level. It is labelled ":reply-with msg3" so that Bob's reply can reference it.
(ask-one
:sender Alice
:receiver Bob
:reply-with msg3
:language KIF
:ontology warehouse-stock
:content (val
(hdmi-slot-count
(and
(item-type Television) (item-size 50)
)
)
?num
)
)
Message 4 - Bob replies with HDMI slots number:
In the same fashion, Bob replies with the number of HDMI slots using the "tell" performative.
(tell
:sender Bob
:receiver Alice
:language KIF
:ontology warehouse-stock
:in-reply-to msg3
:content (val
(hdmi-slot-count
(and
(item-type Television) (item-size 50)
)
)
3
)
)
Additional Notes
- I used :ontology warehouse-stock in all messages to ensure that the agents interpret the content against the
same shared vocabulary. Without such a shared ontology, stock-quantity and hdmi-slot-count would be just opaque
symbols.
- This exchange uses the Knowledge Interchange Format (KIF) as the content language. But since KQML is a
communication protocol that acts as the 'envelope', or pragmatic layer, defining performatives and
metadata, KIF could be replaced with another content language, especially one rooted in first-order logic, like
OWL, and the agent dialogue would still work.
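The envelope/content separation described in these notes can be sketched in code. Below is a minimal Python sketch that renders a KQML message as an s-expression string; the helper name make_kqml and the keyword-argument mapping are my own illustration, not part of any KQML library.

```python
# Minimal sketch of building a KQML message string in Python.
# The message mirrors the activity's dialogue; helper names are
# illustrative, not a standard KQML API.

def make_kqml(performative, **fields):
    """Render a KQML envelope as an s-expression string.
    Underscores in keyword names become hyphens (reply_with -> :reply-with)."""
    parts = [f":{k.replace('_', '-')} {v}" for k, v in fields.items()]
    return f"({performative}\n  " + "\n  ".join(parts) + ")"

# KIF content: query the stock quantity of 50-inch televisions,
# binding the result to the ?qty variable.
content = "(val (stock-quantity (and (item-type Television) (item-size 50))) ?qty)"

msg1 = make_kqml(
    "ask-one",
    sender="Alice",
    receiver="Bob",
    reply_with="msg1",
    language="KIF",
    ontology="warehouse-stock",
    content=content,
)
print(msg1)
```

Keeping the envelope builder ignorant of the content string is the point: swapping KIF for another content language changes only the content argument, not the function.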
Collaborative Discussion 2
Peer Response 1
The first post I responded to gave a clear explanation of, and comparison between, KQML as an agent communication
language and method invocation techniques. The point I found most interesting, and aimed to address, was the
colleague's remark that "Because agents only need to agree on message structure and shared
vocabularies, KQML can connect heterogeneous implementations with less dependency on shared class libraries or
tightly versioned APIs." I found the mention of APIs in this context interesting, so I did some further
reading and wrote the following response.
My Response
Your differentiation between KQML (Knowledge Query and Manipulation Language) and method invocation through
Python, Java or similar programming languages is clear and to the point. I like how you framed KQML as an 'outer
language' that allows for loose coupling and interoperability across distributed agentic systems
(Finin et al., 1994). I agree with you on these points and would add that method invocation in Python or Java
requires shared code structures and known interfaces (Mayfield, Labrou and Finin, 2005).
I also acknowledge the KQML disadvantages you mentioned. KQML, and Agent Communication Languages (ACLs) in
general, being based on the theory of speech acts, often struggle to achieve their promised full
flexibility. KQML agents require a shared ontology for the communication to be effective and correct.
Without such a shared ontology, agents may interpret performatives and content incorrectly.
Furthermore, I found your contrast of our discussion with APIs (Application Programming Interfaces)
thought-provoking. It made me think about where KQML stands in relation to APIs. Revisiting Finin et al. (1994),
I found the paper discussing KRILs (KQML Router Interface Libraries). These are the APIs for KQML routers,
which are content-independent message channels; each KQML-speaking agent has its own router
process. This router handles all KQML messages going to and from its associated agent. So, to sum up,
KQML-based agents speak through routers that expose APIs (KRILs), and therefore KQML and APIs are not mutually
exclusive terms to contrast against each other.
Thank you for your informative post, which helped me consolidate what I know so far about KQML and
encouraged me to explore the relationship between KQML and APIs.
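The router idea discussed in this response can be illustrated with a toy sketch. Assuming a very simplified picture (the class and method names below are my own, not the actual KRIL API), each agent owns a router that delivers messages based only on the :receiver field, keeping the channel content-independent:

```python
# Toy sketch of the KQML router idea: content-independent delivery.
# Class/method names are illustrative, not the real KRIL interface.

class Router:
    registry = {}  # shared directory of routers, keyed by agent name

    def __init__(self, agent_name, handler):
        self.agent_name = agent_name
        self.handler = handler  # callback into the owning agent
        Router.registry[agent_name] = self

    def send(self, message):
        # Route purely on the :receiver field; the router never
        # inspects :content, which is what makes it content-independent.
        Router.registry[message[":receiver"]].deliver(message)

    def deliver(self, message):
        self.handler(message)

received = []
bob_router = Router("Bob", received.append)      # Bob's agent-side callback
alice_router = Router("Alice", received.append)  # Alice's router (unused here)

# Alice's agent calls her router's API (the KRIL-like layer) to send.
alice_router.send({
    ":performative": "ask-one",
    ":sender": "Alice",
    ":receiver": "Bob",
    ":content": "(val (stock-quantity ...) ?qty)",
})
```

The agent only ever talks to its own router's small API, which is the sense in which KQML systems still rely on APIs internally.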
Reference List
Finin, T. et al. (1994) ‘KQML as an Agent Communication Language’, in Proceedings of the Third International
Conference on Information and Knowledge Management. Conference on Information and Knowledge Management,
New York, NY, USA: Association for Computing Machinery, pp. 456–463. Available at:
https://doi.org/10.1145/191246.191322.
Mayfield, J., Labrou, Y. and Finin, T. (2005) ‘Evaluation of KQML as an Agent Communication Language’, in
Intelligent Agents II: Agent Theories, Architectures, and Languages. IJCAI’95-ATAL Workshop, Berlin,
Heidelberg: Springer.
Peer Response 2
The second post I responded to also gave a good explanation of, and contrast between, the approaches. The most
interesting part of the post was the mention that adopting KQML aligns well with BDI-style agent models. Having
already read about the BDI model for our group project, I again did some additional reading and focused on this
point in my response.
My Response
Thank you for the clear explanation of Agent Communication Languages (ACLs), Knowledge Query and Manipulation
Language (KQML) and method invocation for communication among agents. I agree with you on the advantages and
disadvantages that you mentioned for each.
What I found most interesting in your post is your point that the high level of abstraction in KQML suits the
Belief-Desire-Intention (BDI) software model. First introduced by Bratman (1987), the model envisions autonomous
systems built on three concepts. The first is Beliefs, which represent what an
agent considers to be true, with an associated degree of certainty. The second is Desires, the
high-level aims that an agent operates to achieve. Finally, Intentions are the concrete action plans that an
agent commits to in order to realise its desires (Perez, 2019).
With this structure, the BDI model allows agents to work through a clear multistep workflow, building on their
beliefs as those evolve over time, which positions the agent better for reasoning (Georgeff et al.,
1999). Your post further suggests that the separation between the belief, desire and intention components also
serves better communication among agents, especially when coupled with KQML as a communication language.
I think your point makes sense, as long as all communicating agents are built on the BDI
framework. KQML would be suitable here, as it would let agents communicate only intentions and leave each
agent to interpret those intentions according to its own beliefs and in the light of
its own desires. It would also be important, I think, for those agents to 'know' about the existence of other
agents in their multi-agent ecosystem as part of their beliefs, and to hold cooperation with them as part of
their desires.
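The BDI cycle described in this response can be sketched as a toy deliberation loop. This is my own simplification (function name, plan encoding and the example threshold are illustrative, not taken from a real BDI framework):

```python
# Toy sketch of one BDI deliberation cycle: beliefs are facts the
# agent holds, desires are ranked goals, and the adopted intention
# is the concrete plan of the first desire whose precondition holds.

def bdi_step(beliefs, desires, plans):
    """Pick the first desire whose plan precondition holds under the
    current beliefs, and return its actions as the adopted intention."""
    for desire in desires:
        precondition, actions = plans[desire]
        if precondition(beliefs):
            return actions
    return []  # no applicable plan: no intention adopted

beliefs = {"stock(tv50)": 12}          # what the agent holds true
desires = ["restock", "report"]        # goals, in priority order
plans = {
    # Restock only when believed stock falls below a threshold.
    "restock": (lambda b: b["stock(tv50)"] < 10, ["order(tv50)"]),
    # Otherwise, report current levels.
    "report": (lambda b: True, ["tell(stock)"]),
}

intention = bdi_step(beliefs, desires, plans)
```

With stock believed to be 12, the restock precondition fails and the agent commits to reporting; updating the belief below the threshold would flip the adopted intention, which is the belief-driven behaviour the response describes.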
Reference List
Bratman, M.E. (1987) Intention, Plans, and Practical Reason. Harvard University Press.
Georgeff, M. et al. (1999) ‘The Belief-Desire-Intention Model of Agency’, in Intelligent Agents V: Agents
Theories, Architectures, and Languages. ATAL: International Workshop on Agent Theories, Architectures, and
Languages, Berlin, Heidelberg: Springer. Available at: https://doi.org/10.1007/3-540-49057-4_1 (Accessed:
7 March 2026).
Perez, A. (2019) ‘Leveraging the Beliefs-Desires-Intentions Agent Architecture’, MSDN Magazine, January.
Available at:
https://learn.microsoft.com/en-us/archive/msdn-magazine/2019/january/machine-learning-leveraging-the-beliefs-desires-intentions-agent-architecture
(Accessed: 7 March 2026).