ST Picks: How safe is your data when Apple AI comes to the iPhone?

SINGAPORE - Starting later in 2024, Apple will be adding artificial intelligence (AI) to its iPhones, iPads and Macs. This includes a souped-up version of its Siri virtual assistant and allowing Siri to forward questions and prompts to OpenAI’s ChatGPT.

The announcements, made at Apple’s 35th annual Worldwide Developers Conference (WWDC) from June 10 to 14 in California, have raised questions about how users’ personal data will be handled by the company, which has made privacy a hallmark of its brand.

Could incorporating OpenAI’s technology result in users’ data being shared and used for AI model training? What exactly is in Apple’s AI? The Straits Times answers these questions and others.

At WWDC, Apple unveiled Apple Intelligence, a new suite of AI features that includes a more conversational and smarter Siri, as well as an image generator called Image Playground.

Siri will be able to interpret commands and carry them out across both Apple and third-party apps, including helping users dig up a photo taken on a particular holiday, finding relevant files to attach to an e-mail, or scheduling a meeting mentioned in a text message.

ChatGPT will also come to Apple products as a chatbot that Siri can turn to for more complex tasks, such as writing documents that require general knowledge less directly linked to the user. Siri will ask users for permission before it sends questions, documents or photos to ChatGPT. After ChatGPT responds, Siri will present its answers.

Meanwhile, Image Playground will embed metadata stating whether images were created by AI, in an effort to stop fake pictures being passed off as real ones. The Image Playground app will pop up in Messages or Notes to let users create cartoons and illustrations rather than photorealistic images, another way to guard against these tools being misused. Image Playground will work only on the iPhone 15 Pro, iPhone 15 Pro Max, or iPads and Macs with an M1 or later chip.

Market observers said OpenAI is on Apple devices because the latter was too slow in the AI chatbot race. Free AI chatbots from Google and Meta have already appeared in popular productivity and communication tools. For instance, Meta AI is now a fixture in most users’ WhatsApp accounts and is accessible through search bars across Instagram and Facebook. Google has also integrated its AI chatbot Gemini (formerly Bard) into the messaging apps of Android users, who can also set Gemini as the default AI assistant on their mobile devices.

Apple said it was starting with the best chatbot, but that it would support other third-party chatbots down the road.

Apple’s suite of AI tools is meant to be a personal assistant that draws on users’ private data in e-mails, calendars, text messages, pictures and apps, with most of the processing done on users’ devices. OpenAI, on the other hand, processes users’ information in the cloud if users decide to use ChatGPT through Apple.

However, OpenAI made an important concession as part of its agreement with Apple, agreeing not to store any prompts from Apple users or collect their IP addresses. This protection falls away if users log in to an existing ChatGPT account on their Apple devices, which some might want to do to take advantage of their ChatGPT history or the benefits of ChatGPT’s paid plans.

Apple has kept personal data processing, such as Face ID verification, on users’ devices to limit risky exposure.
Similarly, Apple Intelligence will process AI prompts directly on users’ devices using smaller AI models as much as possible. But some AI tasks, such as those involving images, may need more processing power. In such cases, Apple Intelligence will send users’ queries and data to a cloud computing platform controlled by Apple, where a larger, more capable AI model will fulfil the request.

A major privacy breakthrough announced at the 2024 WWDC allows Apple’s cloud computing platform to run computations on sensitive data without itself being able to tell what data is being processed. Apple’s new platform, dubbed Private Cloud Compute, borrows privacy-preserving concepts from the iPhone. After fulfilling a user’s AI request, Private Cloud Compute scrubs itself of any user data involved in the process, Apple said. Apple claims Private Cloud Compute is possible only because of its tight control over everything from specialised, proprietary computer chips to software.

In a technical document released on June 10, Apple said its AI models are trained on licensed data. Like the AI models of OpenAI, Meta and Google, Apple’s models also draw on public data from the internet, although the company did not specify what that data is. Web publishers who object to their content being used for training have been instructed to add specific directives to their websites to prevent Apple’s web crawler from collecting their data.

“We never use our users’ private personal data or user interactions when training our foundation models,” Apple said. The firm also applies filters to remove personally identifiable information, such as social security and credit card numbers, that is publicly available on the internet. Profanity and other low-quality content are also filtered out of the training corpus.
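
For publishers wondering what those opt-out directives look like: Apple’s developer documentation describes a crawler token, Applebot-Extended, that can be disallowed in a site’s robots.txt file. The snippet below is a minimal sketch of such an entry, assuming that token applies; publishers should confirm the current directives against Apple’s own guidance.

```
# robots.txt - a minimal sketch of opting out of AI training
# Assumes Apple's Applebot-Extended token; check Apple's developer
# documentation for the current directives before relying on this.

# Stop page content from being used to train Apple's foundation models
User-agent: Applebot-Extended
Disallow: /

# Ordinary Applebot crawling (e.g. for Siri and Spotlight suggestions)
# can still be permitted separately
User-agent: Applebot
Allow: /
```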
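
Apple has not described how its PII filters work. As a rough illustration only, and not a representation of Apple’s actual pipeline, pattern-based scrubbing of obvious identifiers such as social security and credit card numbers from a text corpus could look like the Python sketch below; the patterns and names here are assumptions made for illustration.

```python
import re

# Illustrative patterns only - production training pipelines use far more
# sophisticated PII detection than simple regular expressions.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),             # e.g. 123-45-6789
    "credit_card": re.compile(r"\b\d(?:[ -]?\d){12,15}\b"),  # 13-16 digit card numbers
}

def scrub_pii(text: str) -> str:
    """Replace matches of each PII pattern with a placeholder token."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label.upper()}]", text)
    return text

if __name__ == "__main__":
    sample = "My card is 4111 1111 1111 1111 and my SSN is 123-45-6789."
    print(scrub_pii(sample))
    # -> My card is [REDACTED_CREDIT_CARD] and my SSN is [REDACTED_SSN].
```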