April 15, 2026


Fighting online harms for a safer digital space in Singapore.

By Carol Soon, Deputy Head and Associate Professor (Practice) at the Department of Communications and New Media, National University of Singapore.

 

While digital technologies have increased opportunities for self-expression, civic engagement and community building, they have also exacerbated existing threats to individuals and deepened fissures within societies and between nation-states. In particular, the widespread adoption of social media has contributed to the growing prevalence of online harms, ranging from misinformation, disinformation and deepfakes to scams. Studies conducted in different jurisdictions, including Australia, Canada, the UK and the US, highlight the increase in people’s encounters with harms such as hate speech, cyberbullying and harassment, image-based sexual abuse, and misinformation.

In recent years, real-world events arising from increasingly fraught geopolitics have shaped online narratives and perpetuated misinformation and disinformation about different communities, further blurring the boundary between the online and offline realms. For example, there was a surge of online vitriol against Singapore shortly after the Israel-Hamas war broke out, and almost two years later, developments in the war-torn Middle East have contributed to hateful speech and behaviour towards the Jewish community in Singapore, in both online and offline spaces.

Advancements in AI technologies and their low barriers to adoption have enabled perpetrators to scale up existing online harms and to create new ones. For example, “pig butchering”, a scam tactic, has become more sophisticated and convincing.

As reported by the Infocomm Media Development Authority (IMDA) in its annual report for 2024 and 2025, 99% of Singapore resident households were connected to broadband Internet in 2024. Given such high connectivity and technology adoption, these risks are not hypothetical but clear and present dangers.

A survey conducted by the Ministry of Digital Development and Information (MDDI) in 2025 found that 84% of Singapore residents reported encountering harmful online content in the past year. Content supporting illegal activity, such as scams, was the most frequently encountered, followed by sexual content, violent content, cyberbullying, and content causing racial or religious tension.

Singapore’s Approach to Combatting Online Harms

Singapore has taken a multi-pronged approach to combat online harms, involving legislation, public education, and collaborative organisational efforts.

Regulation to Combat Online Harms

The government has enacted several laws targeting different forms of online harm:

In response to rising threats like hate speech, cyberbullying, self-harm, and online radicalisation, new laws were introduced in 2023:

  • Online Safety (Miscellaneous Amendments) Act (OSA): Establishes a Code of Practice for Online Safety requiring designated social media platforms to implement system-wide measures that minimise users’ exposure to harmful content and to provide accessible reporting channels. Under the OSA, six designated social media services must ensure users can easily report harmful content and must proactively reduce access to such material.
  • Online Criminal Harms Act (OCHA): Provides authorities with both ex-ante and ex-post powers to swiftly address illegal activities online. This includes scams, cybercrime, unlawful gambling, drug-related offences, and other malicious activities. By enabling timely intervention, OCHA allows the government to remove or block access to content suspected of facilitating criminal acts.

Following examples from Australia, the EU, the UK, and the US, Singapore also introduced measures to ensure online services are age-appropriate for children, such as the Online Safety Code of Practice for App Distribution Services in early 2025. The government has recently passed the Online Safety (Relief and Accountability) Bill, which will establish an Online Safety Commission by mid-2026 to help victims obtain faster recourse.

Public Education and Digital Literacy

Besides regulation, the Singapore government has spearheaded public education initiatives to promote online safety and cultivate greater digital literacy among the populace. Notably, IMDA’s Digital for Life (DfL) movement, launched in 2021, brings together partners from the public, private, and people sectors to foster grassroots initiatives that enhance digital skills. Alongside DfL, other agency-supported efforts such as the National Library Board’s S.U.R.E. campaign, the Media Literacy Council, and the National Crime Prevention Council target different segments of the population, including youth, parents, and seniors.

These programmes aim to cultivate a safer online environment and equip citizens with the skills to navigate digital spaces responsibly.

Organisational and Industry Efforts

Social service organisations such as SG Her Empowerment, TOUCH Community Services, and Singapore Children’s Society have launched targeted programmes for vulnerable groups, including girls and young women, children, and low-income families.

Technology companies like Google and Meta have also contributed through public outreach campaigns, either independently or in collaboration with the government and social service sector, to help users understand online risks and adopt safer digital practices.

Closing the Gaps in Online Safety

In the face of rapid technological advancements, legislation must keep pace with emerging online harms. For instance, Artificial Intelligence (AI) has created new risks such as non-consensual synthetic intimate imagery, including deepfake pornography. A recent study by the Institute of Policy Studies (IPS) identified deepfakes as a severe harm. The government is considering amendments to Singapore’s Criminal Law to address large-scale electronic circulation of obscene materials and the use of AI in intimate images and child abuse content.

Effective communication with citizens is critical, as they are the primary beneficiaries of these laws. Research by the author and IPS’ Dr Chew Han Ei shows that while the current regulatory framework is robust, with important ex-ante and ex-post levers, the legal landscape can be confusing for the public. To improve clarity, laws could be organised according to the SMCR (Sender-Message-Channel-Receiver) model, which would:

  • Clarify the functional purpose of each law, even when provisions overlap;
  • Support public education by translating legal provisions into intuitive categories; and
  • Create a shared vocabulary for policymakers, regulators, platform providers, and civil society to align efforts and clarify accountability.

While regulations such as the OSA, OCHA, and the Protection from Online Falsehoods and Manipulation Act (POFMA) define platforms’ obligations in their duty of care to users, the first annual online safety reports submitted earlier this year by the six designated social media services revealed lapses in implementation. Although all six platforms had safety measures in place, including content moderation and user tools for managing personal safety, some fell short in critical areas, particularly in protecting children and in the effectiveness of user reporting and resolution processes. The government has provided feedback on these reports. Moving forward, policymakers should mandate minimum standards for platform responsiveness, and social media services should address existing gaps in redressability and transparency.

Conclusion

Up until the mid-1990s, most governments paid little to no attention to the Internet; minimal regulation and intervention were considered important for societies to reap the promises of technology. During that era, utopian visions and mobilisation theories were core themes in the dominant narratives surrounding the Internet and its democratising potential. However, developments over recent decades have shown that participation in the online space is not equal, and technology can be easily manipulated by malicious actors to exploit both individual vulnerabilities and broader societal pain points.

The nature and intensity of online harms will continue to evolve as innovation in digital technologies progresses. To tackle online harms in a way that minimises the number of victims, multi-stakeholder and multi-disciplinary efforts must improve. Legislation must remain adaptive to the changing technological landscape; technology companies must be held accountable for lapses in addressing harmful content and perpetrators; and public communication should aim to close existing gaps between awareness and action.

About the writer

Dr Carol Soon is Associate Professor (Practice) in media regulation and digital policy at the Department of Communications and New Media at NUS, where she is also Deputy Head. She conducts research and outreach, and builds collaborations on public policy issues relating to media and technology regulation, digital literacy and skills, digital inclusion, web 3.0 governance, and public engagement. She also specialises in policy communication and the design of public engagement initiatives. Her work is supported by public, private and philanthropic institutions.

Dr Soon is Vice Chair of Singapore’s Media Literacy Council and a member of the World Economic Forum’s Global Future Council on Information Integrity. She works with public and private organisations on research and programmes relating to the use and governance of technology, public engagement and policy communication.

About the organisation

The Department of Communications and New Media (CNM) is a leading centre of communication and new media learning rooted in Asia and based on the principles of interdisciplinary and multi-disciplinary theory, research, and practice. CNM’s mission is Communication for Transformation: it aspires to transform students, the discipline, the sector, and society through teaching innovation, research excellence, and knowledge translation.

CNM focuses on communication and new media studies integrating cultural studies, critical media studies, mass and computational communications, communication management and interactive media design. Engaging the intersections of humanities, arts, social sciences, computing, engineering, and design, CNM is the only department and programme in the Asia Pacific region to focus on critical, creative, qualitative, quantitative and computational research and pedagogy.

The views and recommendations expressed in this article, published in November 2025, are solely those of the author/s and do not necessarily reflect the views and position of the Tech for Good Institute.
