Inclusive innovative technologies and machine learning for outreach, engagement and impact

8 Apr 2019 09:00h - 10:45h

Event report


Session moderator Mr Bendik Loevaas, a technology entrepreneur, opened the discussion by asking about the risk of inaccuracy when using automated technologies for inclusion. Loevaas invited the representatives of three start-ups in the field to comment on the differences between human and automated interpretation, as well as on the potential futures of automated interpretation and transcription.

Mr David Imseng, CEO at Recapp, noted that humans remain an integral part of the process, but that a certain level of automation enables greater accessibility. Speech recognition was already done with neural networks in the 1990s, and since 2010 these networks have become larger and more capable thanks to deep learning. Automated processes today are not perfect, but 80–90% accuracy is enough for a transcript to be useful to those who could not otherwise follow the discussion. The quality of these transcripts also depends on the type of discussion, as automated software cannot fully follow a lively conversation with rapid turn-taking. Improving these processes will contribute to greater multilingualism online, Imseng remarked.
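
As an aside, transcript accuracy of the kind Imseng cites is conventionally measured as one minus the word error rate (WER). The following minimal sketch (not Recapp's implementation; the example sentences are invented) shows how such a figure can be computed from a reference and an automatic transcript:

```python
# Word error rate: (substitutions + deletions + insertions) / reference length.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words (Levenshtein).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

reference  = "the committee adopted the resolution without a vote"
hypothesis = "the committee adopted a resolution without vote"
wer = word_error_rate(reference, hypothesis)
print(f"WER: {wer:.0%}, accuracy: {1 - wer:.0%}")  # WER 25%, accuracy 75%
```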

Another use of machine learning is processing and indexing large amounts of data, which would be a costly and time-consuming task for human interpreters. While humans remain better at interpretation, if we want to index a ten-year span of audio and video data, then speech recognition and automated indexing are very helpful. Word-by-word protocols are becoming outdated, but this does not mean that automation will come at the expense of humans, who remain a vital component of the task.
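
To illustrate the archive-indexing idea in the simplest terms (this is an assumption about the general approach, not any speaker's actual system; the file names and transcripts are invented stand-ins for speech-to-text output), each recording is transcribed once and the words are mapped back to the recordings that contain them, making years of audio searchable by keyword:

```python
from collections import defaultdict

# Hypothetical output of a speech-recognition pass over an audio archive.
transcripts = {
    "session_2012_04.wav": "the committee discussed multilingual access online",
    "session_2015_11.wav": "delegates debated accessibility and remote participation",
    "session_2019_04.wav": "speech recognition enables indexing of large archives",
}

def build_index(transcripts: dict[str, str]) -> dict[str, set[str]]:
    index: dict[str, set[str]] = defaultdict(set)
    for recording, text in transcripts.items():
        for word in text.lower().split():
            index[word].add(recording)   # word -> recordings that mention it
    return index

def search(index: dict[str, set[str]], query: str) -> set[str]:
    """Return recordings containing every term in the query."""
    terms = [index.get(t, set()) for t in query.lower().split()]
    return set.intersection(*terms) if terms else set()

index = build_index(transcripts)
print(search(index, "remote accessibility"))   # {'session_2015_11.wav'}
```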

Mr Taaha Bin Khalid from VUME spoke about the importance of presenters creating inclusive presentations and sharing knowledge more efficiently. His start-up, VUME, focuses on ensuring 100% accurate content while delivering a live presentation. Bin Khalid described research he conducted to establish how many people with disabilities actually attend conferences, workshops, and other events. According to him, one in five people worldwide has a disability, and at events 70% of participants with disabilities have an invisible disability, which makes them hard for a presenter to detect. These include partial or situational disabilities, such as not being able to hear the speaker from the back of the room. The six main types of disability are visual, hearing, cognitive, speech, mobility, and neural. VUME currently gives workshops at public organisations, schools, and events to spread knowledge of inclusive presentations and events.

Mr Kim Ludvigsen, CEO at Interprefy, remarked that when it comes to inclusion at private and public events, innovative software is replacing hardware that has been in use for the last 50 years. From wires and telephones we have arrived at Voice-over-IP (VoIP) and the Internet as part of the fourth generation of remote simultaneous interpretation (RSI). ‘Simultaneous interpreting today is cumbersome, it is costly and demanding’, Ludvigsen emphasised. RSI does away with this outdated process: it cuts costs for organisers, does not require expensive equipment, and turns any smart device into a tool for greater participation. The audio or video signal travels through the cloud to interpreters who can be anywhere in the world, and the interpretation travels back through the cloud and is delivered live to participants' smart devices. This technology opens up possibilities for multilingualism, as there is no limit to the number of languages that can be interpreted simultaneously. It also bridges the distance between the venue and online attendees, and overall contributes to global knowledge creation. Automation makes simultaneous interpretation cheaper; as costs go down, demand goes up, and many events are turning to RSI. ‘Interpreters will not be replaced by technology, but by interpreters using technology’, Ludvigsen stressed.
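
The RSI flow Ludvigsen describes can be pictured as a simple cloud relay with one channel per language. The sketch below is a schematic assumption, not Interprefy's actual architecture: the venue publishes the floor audio, a remote interpreter subscribes to the floor and publishes an interpretation on a language channel, and each participant's device subscribes to the channel it wants.

```python
from collections import defaultdict
from typing import Callable

class CloudRelay:
    """Toy in-memory publish/subscribe relay standing in for the cloud."""
    def __init__(self) -> None:
        # channel name ("floor", "fr", "es", ...) -> listener callbacks
        self._listeners: dict[str, list[Callable[[bytes], None]]] = defaultdict(list)

    def subscribe(self, channel: str, on_audio: Callable[[bytes], None]) -> None:
        self._listeners[channel].append(on_audio)

    def publish(self, channel: str, chunk: bytes) -> None:
        for listener in self._listeners[channel]:
            listener(chunk)

relay = CloudRelay()

# A remote interpreter: listens to the floor, publishes French interpretation.
def interpreter_fr(chunk: bytes) -> None:
    interpreted = b"[fr] " + chunk          # stands in for human interpretation
    relay.publish("fr", interpreted)

relay.subscribe("floor", interpreter_fr)

# A participant's smart device: subscribes to the French channel.
relay.subscribe("fr", lambda chunk: print("participant hears:", chunk.decode()))

# The venue streams floor audio into the cloud.
relay.publish("floor", b"Good morning, colleagues.")
```

Adding another language in this model only means subscribing another interpreter to the floor channel, which mirrors the point that RSI imposes no limit on the number of simultaneously interpreted languages.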


By Jana Mišić