By Inah
KUALA LUMPUR, 4 March 2025: An exclusive workshop on ‘Safeguards against Generative Artificial Intelligence (AI)’ was organized by the Communication and Multimedia Content Forum of Malaysia (CMCF), in collaboration with Google’s Trust & Safety APAC, on 20 February 2025 at Co-labs Ken TTDI, Petaling Jaya, Malaysia.
One of the speakers, Andri Kusumo from Google’s Trust & Safety APAC, introduced participants to the safeguards built into generative AI and briefed them on recent developments in the field. Participants came from academia, industry, the media, non-governmental organisations and civil society, as well as parents. One focal point of the discussion was Google’s approach to designing AI systems with integrated safety features. Participants also engaged in a lively exchange of insights on the evolution of artificial intelligence and its future direction, tracing its journey from inception to the present and examining key milestones, technological advancements, and the ethical considerations that shape its development.
By sharing experiences and best practices, the participants not only underscored the importance of safety in AI development but also fostered a collaborative spirit aimed at further strengthening these safeguards. The discussion explored dedicated measures such as content filtering, privacy settings, and user-friendly reporting mechanisms, all designed to create a safer online environment for young users.
Following the workshop, a closed roundtable session provided a platform for an open and candid exchange among stakeholders from various sectors. Participants were encouraged to voice their concerns and perspectives on the pressing issues surrounding AI. The session also addressed the sensitive issue of Child Sexual Abuse Material (CSAM) and the challenges AI poses to its detection and prevention. Stakeholders emphasized the importance of developing ethical AI frameworks that prioritize the safety of children online while ensuring adherence to privacy rights. The discussion opened avenues for collaboration among tech companies, law enforcement, and advocacy groups to build more robust systems that can effectively identify and mitigate the risks associated with CSAM.
(These are the views of Inah, a PhD student in the Department of Communication, AbdulHamid AbuSulayman Kulliyyah of Islamic Revealed Knowledge and Human Sciences (AHAS KIRKHS), International Islamic University Malaysia (IIUM), and do not represent those of IIUM Today.)