OpenAI has unveiled a new AI tool, Voice Engine, capable of generating lifelike speech from a mere 15-second audio sample, the company announced in a blog post. The tool aims to offer reading assistance, aid translation efforts, and give a voice to nonverbal individuals with speech conditions. Despite its potential benefits, OpenAI acknowledges the serious risks the technology poses, especially during an election year.
Voice Engine, first developed by OpenAI in late 2022, has undergone private testing with a select group of partners who have agreed to usage policies requiring explicit consent from original speakers and prohibiting unauthorised impersonation. OpenAI stresses transparency: partners must disclose that the voices are AI-generated, and all audio produced by Voice Engine carries a watermark for traceability.
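OpenAI has not disclosed how Voice Engine’s watermark works. As a purely illustrative sketch of the general idea, the toy Python example below hides a short identifier in the least-significant bits of 16-bit PCM samples; the embed_id and extract_id functions and the "VE-DEMO-01" tag are inventions for this example, and production watermarks use far more robust, perceptually hidden schemes that survive compression and editing.

```python
import numpy as np

# Toy LSB audio watermark: hides an ASCII identifier in the
# least-significant bit of successive 16-bit PCM samples.
# Illustrative only -- a real watermark must survive compression and editing.

def embed_id(samples: np.ndarray, tag: str) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    out = samples.copy()
    out[: len(bits)] = (out[: len(bits)] & ~1) | bits  # overwrite LSBs with tag bits
    return out

def extract_id(samples: np.ndarray, tag_len: int) -> str:
    bits = (samples[: tag_len * 8] & 1).astype(np.uint8)  # read LSBs back
    return np.packbits(bits).tobytes().decode()

audio = (np.random.randn(16000) * 1000).astype(np.int16)  # fake 1 s of audio
marked = embed_id(audio, "VE-DEMO-01")
print(extract_id(marked, len("VE-DEMO-01")))  # -> VE-DEMO-01
```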
OpenAI advocates responsible deployment of synthetic voices, suggesting safeguards such as voice authentication to verify the original speaker and a ‘no-go voice list’ to detect and block the cloning of prominent figures’ voices. The company also recommends phasing out voice-based authentication as a means of accessing sensitive information such as bank accounts. However, a wide release of Voice Engine remains uncertain: OpenAI is seeking feedback and evaluating the results of its small-scale tests before deciding on broader deployment.
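OpenAI has not said how a ‘no-go voice list’ would be enforced. One plausible, entirely hypothetical approach is to compare a speaker embedding of each uploaded sample against embeddings of protected voices and reject close matches; in the sketch below, the embeddings, the cosine-similarity threshold, and the is_blocked helper are all assumptions for illustration, standing in for a real speaker-verification model.

```python
import numpy as np

# Hypothetical no-go voice check: reject an upload whose speaker
# embedding is too similar to any voice on a protected list.

THRESHOLD = 0.85  # illustrative cosine-similarity cutoff

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_blocked(sample_emb: np.ndarray, no_go: list[np.ndarray]) -> bool:
    """True if the sample matches any protected voice too closely."""
    return any(cosine(sample_emb, ref) >= THRESHOLD for ref in no_go)

# Usage with made-up 256-dimensional embeddings:
rng = np.random.default_rng(0)
protected = [rng.standard_normal(256) for _ in range(3)]
upload = protected[1] + 0.05 * rng.standard_normal(256)  # near-clone of voice 1
print(is_blocked(upload, protected))  # -> True
```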
Why does it matter?
The introduction of Voice Engine comes amid rising concern over AI-generated deepfakes and their potential to spread misinformation, particularly in political contexts. Recent incidents, such as a fake robocall imitating President Biden and an AI-generated video of a Senate candidate, underscore the urgency of addressing the ethical and societal implications of advanced AI technologies.
OpenAI, the Microsoft-backed AI company, is gearing up to open an office in Tokyo this April, marking its first foray into Asia as it expands its global operations. The move follows the opening of offices in London and Dublin last year, according to a source familiar with the matter who preferred to remain anonymous given the sensitive nature of the information.
The decision to set up in Japan underscores the growing significance of the Asian market in AI development and adoption. A Tokyo office positions OpenAI to tap into Japan’s burgeoning interest in AI, where major players like SoftBank Corp. and Nippon Telegraph and Telephone Corp. are actively building AI-driven services tailored for Japanese speakers.
The AI giant’s expansion into Japan has been in the pipeline for some time, with discussions intensifying after a meeting between OpenAI CEO Sam Altman and Prime Minister Fumio Kishida last April. Altman expressed the organisation’s intent to bolster its Japanese-language services and to work with the government on addressing potential risks and establishing regulatory frameworks.
Why does it matter?
OpenAI’s decision to establish a foothold in Tokyo reflects its commitment to the Japanese market’s evolving needs and its broader strategy of engaging international stakeholders and fostering AI innovation on a global scale. As demand for AI-powered solutions continues to surge worldwide, the move marks a significant milestone in OpenAI’s quest to shape the future of AI.
Microsoft has outlined its commitment to safeguarding customer data privacy as businesses increasingly adopt generative AI tools such as Azure OpenAI Service and Copilot. In a blog post published on 28 March, the tech giant assured that customer organisations using these services are protected under existing privacy policies and contractual agreements. Notably, Microsoft emphasised that an organisation’s data is used to train OpenAI models or foundation models only if the organisation explicitly permits it.
The tech giant clarified that customer data used in its generative AI solutions, including Azure OpenAI Service and Copilot, is not made available for training open-source AI models, addressing concerns previously raised by data privacy experts. Furthermore, Microsoft affirmed that it does not share customer data with third parties such as OpenAI without explicit permission, nor use it to train OpenAI’s foundation models. Any AI solutions that organisations fine-tune on their own data remain exclusive to them and are not shared externally.
The blog post also highlights measures to protect organisations from copyright infringement lawsuits arising from their use of Azure OpenAI Service and Microsoft Copilot. Under the Customer Copyright Commitment announced in 2023, Microsoft pledged to defend customers and cover settlements in the event of copyright infringement lawsuits, provided customers use the guardrails and content filters available within the products.
In addition to copyright protection, Microsoft is focused on safeguarding sensitive data associated with AI usage. Chief Privacy Officer Julie Brill detailed how Microsoft Purview enables corporate customers to identify risks linked to AI usage, including sensitive prompts. Azure OpenAI and Copilot users can apply sensitivity labels and classifications to protect sensitive data, with Copilot summarising content only when the user is authorised to access it. Copilot-generated output inherits the sensitivity labels of the files it references, so data protection policies carry over and unauthorised access is prevented.
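Microsoft has not published the mechanics of this inheritance, but the rule described above amounts to: generated output takes the most restrictive label among its source files. The toy Python model below captures that logic with made-up label names and ranks; it is an illustration of the concept, not the Purview API.

```python
# Toy model of sensitivity-label inheritance: AI-generated output
# takes the most restrictive label among the files it draws on.
# Label names and ranks are illustrative, not Microsoft Purview's.

LABEL_RANK = {"Public": 0, "General": 1, "Confidential": 2, "Highly Confidential": 3}

def inherited_label(source_labels: list[str]) -> str:
    """Return the most restrictive label among the referenced files."""
    return max(source_labels, key=LABEL_RANK.__getitem__)

sources = ["General", "Confidential", "Public"]
print(inherited_label(sources))  # -> Confidential
```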