IBM’s AI suitcase tests navigation tech for the visually impaired
- Aug. 4, 2024
TOKYO – Dr Chieko Asakawa was travelling and negotiating her surroundings, luggage in hand, when a thought struck her: What if her luggage could scan the area around her for her, and be her guide?

That was back in 2017, and even then, the visually impaired scientist was no stranger to turning big ideas into reality. A fellow and leader of accessibility research at technology company IBM, Dr Asakawa is the creator of a self-voicing web browser for visually impaired users, which earned her fame and induction into the American National Inventors Hall of Fame in 2019. That browser, Home Page Reader, launched in 1997 and let users surf the internet by converting text and icons to speech. It became the most widely used web-to-speech system of its time, before being withdrawn from sale years later.

About a year later, in 2018, Dr Asakawa and a team of around 10 other scientists created the first iteration of her ideal suitcase. The team has since made several improvements to it, drawing partly on feedback from around 900 people in Japan and California – the latter gathered at an accessibility conference in 2023. Upgrades include a smaller battery that powers the suitcase for longer, a threefold increase in the number of RGB (red, green, blue) cameras to give 360-degree coverage of its surroundings, and a more powerful and energy-efficient motor.

“The idea is for it to take visually impaired people wherever they want to go, to unfamiliar places they have never been,” said Mr Masashi Oikawa, program manager of accessibility research at IBM Research Tokyo. He was addressing reporters during a media visit to IBM’s Think Lab in Tokyo in June.

Weighing 15kg and standing about a metre tall, the suitcase sports a speaker through which an artificial intelligence (AI) voice assistant gives information to its user, and a microphone through which the user gives instructions on where he or she wants to go.
It also features a light detection and ranging (lidar) system, which maps non-moving objects in the surroundings to work out the suitcase’s position relative to them, and a colour and depth camera that can detect humans in its vicinity. The suitcase also contains a central processing unit and graphics processing unit, a motion sensor and a tactile device.

To take advantage of the suitcase’s capabilities, users must first pair it with an app on their iPhone. By selecting a location in the app via a menu or voice interaction, they tell the suitcase where they want to go. The app is proprietary at the moment and so does not have a specific name, said Mr Oikawa. However, he added that it should be available for download on the Apple App Store when the suitcase is made available commercially.

AI software embedded in the suitcase makes the necessary calculations to take its user on the safest route through a crowded mall – for example, guiding them towards a lift instead of an escalator. The suitcase can also detect and avoid static and dynamic obstacles in its path, and match the user’s walking pace to the people around them.

“The very first version we created, we called it the vacuum cleaner, since it was so noisy because of the multiple fans installed to cool the heat generated inside,” Mr Oikawa told ST. The latest iteration is much quieter, emitting only a low hum when in use, so any information it conveys verbally can be heard clearly.

The suitcase has so far been positioned as an AI navigation tool for the visually impaired in indoor environments such as museums, shopping malls, stadiums, airports and hospitals. Since April 2024, visitors to the National Museum of Emerging Science and Innovation in Tokyo have been able to rent the suitcase for free.

Each suitcase currently costs more than US$30,000 (S$39,800) to develop. This includes the cost of creating a digital twin of a building for navigation.
“We had previously created a digital map of one big shopping mall in Tokyo. The shopping area consists of five floors, and is approximately 22,000 sq m. This took about one day,” said Mr Oikawa.

The challenge now is for the team to take the suitcase outdoors, he said. Outdoor settings have more unpredictable variables – including road conditions, weather, cars, bicycles and traffic lights – that the suitcase has not been trained to recognise and respond to.

Dr Asakawa has big dreams for the navigation technology to be applied to other assistive tools such as wheelchairs and shopping carts. “That’s why we are making our software open source, and we will encourage other companies and developers to utilise it to scale,” she added.

Dr Asakawa does not write code. To develop the suitcase, she provides the concept of what she envisions for it and how it should operate. A group of software engineers then write the code, the hardware is developed, and Dr Asakawa tests it out. The cycle is repeated each time improvements are made.

The first iteration of the suitcase was “primitive” in terms of design and functionality, said Dr Asakawa. Aside from being loud, it lacked current functions such as the ability to connect to an iPhone and 360-degree sensing capabilities.

“I’m really grateful that we’ve been able to come this far,” said Dr Asakawa. “It’s thanks to technological advancements and our team effort. I’m hopeful and look forward to more opportunities ahead of us.”

She added that the team’s current focus is the successful execution of a public trial during Expo 2025 in Osaka, when around 3,000 guests are expected to try out the suitcase and give their thoughts on it. The team members hope to find out what functional improvements need to be made in terms of both software and hardware, and to share what they have with more people of different backgrounds.
They also seek to gather data to understand the public’s acceptance of a navigation robot like the AI suitcase.

“This is important, because we need to make sure our solution is not threatening in any way when used in public spaces,” said Dr Asakawa. “People might feel that lidar and the camera may undermine their privacy, or that a robot might be somewhat harmful to the human body.”

“Increasing social acceptance is key to successful implementation,” she added.