Artefacts - CNN Model Activity
In this activity, we were given a notebook covering the use of CNNs for image classification, and we were invited to modify parts of the code to observe how the model's predictions change.
This notebook demonstrates the use of a CNN for object recognition. It guides you through loading and preprocessing image data, building and training a CNN model, and evaluating its performance.
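The pipeline the notebook walks through (build a small CNN, run images through it, inspect predictions) can be sketched as follows. This is a minimal illustration, not the notebook's actual code: the layer sizes, the 32x32 RGB input, and the ten output classes are assumptions chosen for brevity.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Toy CNN sketch: two conv/pool stages followed by a linear classifier.
    Assumes 3-channel 32x32 inputs and 10 classes (hypothetical values)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # 32x32 -> 32x32
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = x.flatten(1)            # flatten all but the batch dimension
        return self.classifier(x)

model = SmallCNN()
images = torch.randn(4, 3, 32, 32)  # a dummy batch standing in for real data
logits = model(images)
predictions = logits.argmax(dim=1)  # predicted class index per image
print(logits.shape)                 # torch.Size([4, 10])
```

Changing the inputs fed to `model(...)` and re-reading `predictions` is essentially the experiment the activity asks for, just with real preprocessed images and a trained model.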
Artefacts - Collaborative Discussion 2: Legal and Ethical Views on ANN Applications
This week, it was time to provide peer responses to the initial posts other students had made the week before. I posted two responses, which I include below.
Peer Response 1
Abdulrahman, your post offers a balanced and insightful overview of AI writers’ dual nature: efficiency boosters with embedded risks. I particularly appreciate your framing of AI as a “sparring partner” rather than a hidden aid, which echoes Elgammal et al.’s (2017) call for active human agency in creative processes.
Tasnika and Lauretta’s responses further enrich the conversation by highlighting accessibility gains and the dangers of homogenisation. The point about semantic flattening is especially pertinent: Anderson, Shah and Kreminski (2024) show how AI-generated content tends to converge toward stylistic norms, potentially eroding cultural and individual voice. This is not just a creative concern. It’s a sociolinguistic one, affecting how diverse identities are represented in digital spaces.
Your emphasis on smart habits (fact-checking, disclosure, and bias scanning) is crucial. But I’d argue that these habits must be supported by institutional frameworks. As Ye et al. (2024) and Park et al. (2024) suggest, technical safeguards like differential privacy and audit trails should be paired with digital literacy programmes that empower users to critically evaluate AI outputs.
In sum, your post and the replies converge on a key insight: AI writers are powerful tools, but their value depends on how consciously and ethically we wield them.
---
References
Anderson, M., Shah, S. and Kreminski, M. (2024) ‘Semantic Flattening in AI-Assisted Writing: Risks and Remedies’, Journal of Creative Technologies, 12(1), pp. 45–62.
Elgammal, A. et al. (2017) ‘CAN: Creative Adversarial Networks, Generating “Art” by Learning About Styles and Deviating from Style Norms’, arXiv preprint. Available at: https://arxiv.org/abs/1706.07068
Park, J., Kim, S. and Lee, H. (2024) ‘Persuasive but Wrong: The Epistemic Risks of Generative AI in Public Discourse’, AI & Society, 39(2), pp. 201–218. doi:10.1007/s00146-024-01567-2.
Ye, X., Zhang, Y. and Wu, T. (2024) ‘Privacy Risks in Large Language Models: A Survey of Data Leakage and Mitigation Strategies’, Journal of Information Security, 18(3), pp. 134–150. doi:10.1016/j.jinfosec.2024.03.005.
Peer Response 2
Lauretta, your post offers a nuanced and well-referenced exploration of AI writers across administrative, academic, and creative domains. I particularly appreciate your framing of generative AI as a “cultural intervention”, a concept that invites us to consider not just utility, but the epistemic and aesthetic implications of machine-generated text.
Your synthesis of Hidayatullah et al. (2025) and Cardon and Coman (2025) underscores a key tension: while AI enhances productivity, it can erode authenticity and critical engagement if not carefully governed. This echoes Bender et al.’s (2021) warning that large language models, despite their fluency, lack true communicative intent and can perpetuate harmful biases.
I’d add that the risks you highlight (plagiarism, hallucination, and diminished critical thinking) are not just technical flaws but pedagogical challenges. Embedding AI literacy into writing education, as Begum (2025) suggests, is vital. But we must also cultivate what Floridi (2018) calls “semantic responsibility”: the ability to evaluate not just what is written, but why and how it was generated.
Moreover, your point about workplace perceptions is timely. As generative AI becomes more embedded in professional communication, organisations must navigate the fine line between efficiency and sincerity. Transparent disclosure and human oversight are not optional. They’re ethical imperatives.
In sum, your post compellingly argues that AI writers should be treated as collaborators, not replacements. The challenge ahead lies in designing systems and cultures that preserve human voice, judgment, and originality.
---
References
Bender, E.M. et al. (2021) ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’, FAccT ’21, pp. 610–623. doi:10.1145/3442188.3445922.
Floridi, L. (2018) The Logic of Information: A Theory of Philosophy as Conceptual Design. Oxford University Press.
Artefacts - Additional Useful Resources
Intermediate Deep Learning with PyTorch on DataCamp
This course covered more advanced techniques for training robust neural networks and shed more light on image-related tasks with CNNs, which is essential for the individual assignment. It also covered the basics of recurrent neural networks (RNNs) and multi-input, multi-output architectures.
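To keep a record of the RNN basics mentioned above, a minimal PyTorch sketch of a recurrent layer feeding a classifier is shown below. The input size, hidden size, sequence length, and three output classes are all hypothetical values chosen for illustration, not anything from the course material.

```python
import torch
import torch.nn as nn

# Minimal RNN sketch: an LSTM over a sequence, classified from its
# final hidden state. All dimensions here are assumed for illustration.
rnn = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)
head = nn.Linear(16, 3)             # 3 hypothetical classes

x = torch.randn(4, 10, 8)           # (batch, seq_len, features) dummy batch
outputs, (h_n, c_n) = rnn(x)        # h_n: (num_layers, batch, hidden_size)
logits = head(h_n[-1])              # classify from the last layer's hidden state
print(logits.shape)                 # torch.Size([4, 3])
```

The same `batch_first=True` convention used for image batches in CNN code carries over here, which makes it easier to combine the two in the multi-input architectures the course introduced.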
I also started another DataCamp course, focusing on deep learning for images, whose first chapter gave me greater confidence to start working on the assignment's code.
DataCamp Intermediate Deep Learning with PyTorch Course Certificate