News

Guidelines required for responsible use of gen AI in healthcare: SingHealth director

SINGAPORE – As generative artificial intelligence (gen AI) becomes more widely used in healthcare, the health sector must come together with the tech industry and regulators to devise guidelines for the use of the nascent technology.

These would allow for the development of innovations using gen AI while also ensuring its safe use, said Associate Professor Daniel Ting, director of SingHealth’s AI Office.

He pointed to several uses of the technology in healthcare, such as quickly summarising clinical notes and other patient data, as well as proofreading medical literature. In the longer term, gen AI could even be used by physicians to generate diagnoses or prognoses, he said, noting, though, that a governing framework would have to be in place before such uses would be allowed here.

Prof Ting added that any framework governing the use of gen AI in healthcare would have to evaluate several criteria, such as ensuring that the responses generated are aligned with clinical consensus and free from AI “hallucinations” – inaccurate results produced due to factors such as insufficient training data.

Noting that regulations can often lag behind new technology, he said it is important to include regulators in the development process to ensure the safety and compliance of the end product.

“I think the key is to get the discussion going early,” he said.

Prof Ting was speaking to reporters on the sidelines of a symposium titled Charting the Next Phase: Emerging Trends for AI in Health. Held at the Capella Singapore hotel in Sentosa, the symposium was part of the Asia Tech x Singapore event, which ends on May 31.

When asked how such guidelines would fit in with the Model AI Governance Framework for Generative AI – which is being developed by the AI Verify Foundation and the Infocomm Media Development Authority – Prof Ting said a framework for gen AI in healthcare is still in the early conceptualisation stages here.
Input from other parties, such as healthcare professionals, would be required as well before such a framework becomes a reality, he added.

Pointing to the different potential applications of gen AI in the healthcare sector, Prof Ting said that different frameworks may be needed to oversee such varied uses.

His presentation included findings from a paper in The Lancet Digital Health journal published on April 23. Prof Ting is the corresponding author of the study, which identified several challenges in medicine’s adoption of large language models (LLMs) – AI systems that can process vast amounts of text to understand and generate human language.

These include the need for security measures to prevent the inadvertent exposure of identifiable patient data ingested during the training of such LLMs, as well as the need for patients to give explicit informed consent for their data to be used for such purposes.

Another issue raised was that some of the data ingested when training LLMs might be used in violation of intellectual property laws. Developers should be transparent about the sources of training datasets used in developing LLM-based models where possible, the study stated.

It noted that a decentralised, blockchain-based market for medical research and publishing could promote data and workflow transparency, among other benefits, though this would require more research.

Prof Ting also pointed to the lawsuit by The New York Times against OpenAI for copyright infringement, noting that the ChatGPT developer had argued its use of the media outlet’s content was justified as “fair use”, which allows creators to build upon copyrighted work. While different countries may have different views on the matter, any decision in the US could change the field, he said.

“It’s a space to pay close attention to,” he said.