Forum: Steps taken to reduce the risks of AI and its potential harms

I thank Mr Peh Chwee Hoe for his letter (May 22).

Governments around the world would like to see artificial intelligence (AI) deployed and used in responsible and ethical ways, so that citizens can enjoy its benefits safely and with confidence. Singapore is no different.

With the wide range of AI applications, there is no one solution that addresses all AI-related risks. However, we endeavour to meaningfully reduce the risks of AI and its potential harms.

Our pioneering Model AI Governance Framework lays out the key principles expected in the design of AI systems. For example, they should be human-centric, explainable, and transparent.

We will soon launch a framework specifically to cover generative AI systems, as they have gained wider adoption. Among other dimensions, it will call for greater accountability along the AI development chain, including a clearer allocation of responsibilities and stronger safety nets.

We are also working with developers to better manage risks before they materialise. Our open-source AI Verify testing toolkit allows developers everywhere to validate the performance of their AI systems against our governance frameworks.

This strengthens our ability to anticipate and tackle emerging risks, especially through working with researchers at the scientific frontier, and with like-minded policymakers worldwide to build consensus around common tools, benchmarks, and standards.

Harms like workplace discrimination and online falsehoods can already happen without AI. If AI is used to cause such harms, relevant laws and regulations continue to apply.

We regularly review the adequacy of existing laws in the light of technological and international developments, and we have introduced amendments or new laws to plug the gaps. In recent years, new laws to tackle egregious content and crimes carried out online have been passed in Parliament.

We are committed to ensuring that AI development serves the public good.
We cannot foresee every harm, but an agile and practical approach can lower the risks and manage the negative effects of AI development. Most importantly, AI stakeholders – from developers to users – must also exercise responsibility and play their part.

Senior Director (National AI Group)
Ministry of Communications and Information