AI tool Silvia improves Spanglish transcription

A new AI assistant is addressing a common frustration for bilingual speakers by accurately transcribing ‘Spanglish,’ a blend of Spanish and English that often confounds other language processing tools. Developed by Mansidak Singh, a product engineer at reAI, Silvia allows users to fluidly switch between languages in a single sentence without losing any context. Singh was inspired to create the app after a conversation highlighted the limitations of existing language assistants, which typically ignore or misinterpret mixed-language input.

Silvia integrates with the user’s keyboard and supports both Spanish and English, with plans to expand soon to other languages such as French, German, and Dutch. Singh used iOS 18’s new Translation API and OpenAI’s Whisper technology to build a solution that is not only effective but also fast and secure, since the app stores no user data. The app is designed to slot seamlessly into everyday conversations, making it easier for bilingual users to communicate without constantly switching settings or keyboards.
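To illustrate the general technique, here is a minimal sketch, assuming the open-source whisper Python package, of transcribing code-switched audio by letting Whisper auto-detect the spoken language rather than pinning it to Spanish or English. This is not Silvia’s actual implementation, and the file name is hypothetical.

```python
# Minimal sketch of code-switched ('Spanglish') transcription with
# OpenAI's open-source Whisper model. Illustrative only: not Silvia's
# actual code, and the audio file name is hypothetical.
import whisper  # pip install openai-whisper

model = whisper.load_model("base")

# Omitting the `language` argument lets Whisper auto-detect the spoken
# language; forcing "es" or "en" would mangle mixed-language speech.
result = model.transcribe("spanglish_voice_note.wav")
print(result["text"])
```

Auto-detection of this kind is what lets a keyboard-level tool stay out of the user’s way: there is no language toggle to flip mid-sentence.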

While the current version of Silvia is limited to languages that use the Roman alphabet, Singh’s approach reflects a practical and thoughtful application of AI to solve real-world problems. The app, which has been approved by Apple, will be available for download at the end of the month, offering a more accurate and user-friendly experience for those who speak in a mix of languages.

In an era where AI is often associated with grand promises, Silvia stands out for its simplicity and focus on improving everyday communication. As multilingual AI tools like Silvia and Nigeria’s new multilingual large language model continue to emerge, the future of AI in language processing looks increasingly inclusive and adaptable to the needs of diverse users.

Altman hints at groundbreaking AI, reveals Project Strawberry

OpenAI is developing Project Strawberry to improve its AI models’ ability to handle long-horizon tasks, which involve planning and executing complex actions over extended periods. Sam Altman, OpenAI’s chief executive, hinted at the project in a cryptic social media post, sharing an image of strawberries with the caption, ‘I love summer in the garden.’ The post led to speculation about the project’s potential impact on AI capabilities.

Project Strawberry, previously known as Q*, aims to significantly enhance the reasoning abilities of OpenAI’s AI models. According to a recent Reuters report, some at OpenAI believe Q* could be a breakthrough in the pursuit of artificial general intelligence (AGI). The project involves innovative approaches that allow AI models to plan ahead and navigate the internet autonomously, addressing common sense issues and logical fallacies that often result in inaccurate outputs.

OpenAI has announced DevDay 2024, a global developer event series with stops in San Francisco, London, and Singapore. The focus will be on advancements in the API and developer tools, though there is speculation that OpenAI might preview its next frontier model. Recent developments in the LMSYS Chatbot Arena, where a new model showed strong performance in math, suggest significant progress in AI capabilities.

Internal documents reveal that Project Strawberry includes a “deep-research” dataset for training and evaluating the models, although its contents remain undisclosed. The innovation is expected to enable AI to conduct research autonomously, with a “computer-using agent” taking action based on its findings. OpenAI plans to test Strawberry’s capabilities in performing tasks typically done by software and machine learning engineers, highlighting its potential to revolutionise AI applications.

AI-powered cars in China, Intel targets automotive market

Intel is making a bold move into the AI-powered automotive industry with the launch of its first discrete GPU designed for autonomous and intelligent cars. The Intel Arc Graphics for Automotive dGPU was unveiled at an event in Shenzhen, China, and is set to be commercially deployed in early 2025. The new technology promises to revolutionise in-car AI experiences, providing drivers and manufacturers with enhanced personalisation and functionality.

The automotive market presents a new opportunity for Intel, which has struggled to keep pace with competitors like Nvidia in the AI sector. Nvidia dominates the market with its GPUs powering the majority of AI workloads, leaving Intel in need of a breakthrough. The new dGPU could be that opportunity, allowing Intel to establish a foothold in a growing industry keen on integrating AI capabilities.

Intel’s new dGPU is an open, scalable platform that builds on its existing SDV System-on-Chip. The platform supports premium AI features such as in-car assistants for navigation and entertainment. Demonstrations at the event showcased its ability to power multiple high-definition displays, voice and gesture recognition, and advanced infotainment systems. Intel’s partners, including Thunder Software Technology and Zhiphu Technologies, highlighted the potential for immersive mobile hubs and AI assistants.

The move into the automotive sector is a strategic one for Intel as it seeks to leverage the rapid technological adoption in China. The company aims to tap into an ecosystem of over 100 software companies to provide a wide range of AI-powered in-car experiences. Intel’s Vice President and General Manager of Automotive, Jack Weast, emphasised the potential of this market, citing China’s advanced development cycles and technological adoption as key factors.

California’s AI regulation bill sparks industry backlash

California has become the focal point in the debate over regulating AI as a new bill, SB 1047, advances through the state legislature. The bill, which has drawn significant opposition from AI startups and tech giants, proposes safety measures for large AI models to prevent ‘catastrophic harm,’ such as cyberattacks or mass casualties. The legislation would require developers to conduct safety tests and ensure that humans can shut down AI systems if they pose a danger.

Critics argue that the bill is too vague and could stifle innovation in California. Opponents, including major companies like Meta, OpenAI, and Google, have voiced concerns about its broad and undefined requirements. They fear it could create legal uncertainty and discourage the public release of AI models, harming the state’s vibrant tech ecosystem.

The bill has already passed several legislative hurdles but faces strong resistance as it moves toward a final vote. While its author, Democratic state senator Scott Wiener, is open to amendments, he maintains that the bill aligns with safety standards the industry has already adopted. However, many in the tech community remain unconvinced, citing potential legal and operational challenges if the bill becomes law.

Why does this matter?

The outcome of this legislative battle could have far-reaching implications for AI regulation across the United States, as California often sets the precedent for other states. As the debate continues, the tech industry is closely watching how the state will balance innovation with the need for safety and regulation in the rapidly evolving field of AI.

UN report highlights AI’s impact on global workforce

A new report from the UN Secretary-General’s Envoy on Technology and the International Labour Organization examines the impact of AI on the global workforce. Titled ‘Mind the AI Divide: Shaping a Global Perspective on the Future of Work,’ the report delves into how AI is reshaping labour markets, altering the AI value chain, and changing the demand for skills.

The report highlights the uneven adoption of AI across regions, which could exacerbate global inequalities if left unaddressed. To promote inclusive growth, it emphasises the need for strategies that support AI development in regions that lag in technology adoption.

Strengthening international cooperation and building national capacities are identified as key steps toward creating a more equitable and resilient AI ecosystem. The report advocates for global collaboration to ensure that AI benefits are widely shared, fostering global opportunities for prosperity and human advancement.

Siemens expands management board to enhance AI integration

Siemens is set to expand its management board from five to seven members in a strategic move to accelerate its transition towards a technology-focused enterprise. The company announced that Peter Koerte, head of strategy and technology, and Veronika Bienert, head of Siemens Financial Services, will join the board on 1 October. This expansion is seen as a response to Siemens’ significant scale, with 324,000 employees and €80 billion in revenue, necessitating a larger leadership team to drive growth.

Supervisory Board Chairman Jim Hagemann Snabe, who has held the position since 2018, will seek re-election for another two-year term in February. Snabe emphasised that AI is a key priority for Siemens, which aims to apply AI to industrial applications to stay ahead of competitors. Peter Koerte, in his new role on the management board, is expected to be instrumental in this AI-driven strategy.

In addition to the new appointments, Siemens confirmed that Cedrik Neike, head of the Digital Industries division, will have his contract extended by five years. The company also hinted at future leadership changes, noting that Veronika Bienert could be a potential successor to the current CFO, Ralf Thomas, who plans to retire in 2026. However, Snabe stated it was too early for a formal discussion on this succession.

Musk’s X faces legal action over unauthorised data use in AI training

A consumer group has filed a complaint against Elon Musk’s social media platform X, alleging violations of the General Data Protection Regulation (GDPR) in using user data to train its AI tool, Grok. The complaint, submitted by lawyer Marco Scialdone on behalf of Euroconsumers and Altroconsumo, was lodged with the Irish Data Protection Commission (DPC).

X users recently discovered that their data was being used, without explicit consent, to train Grok, an AI chatbot developed by Musk’s company xAI. The complaint accuses X of failing to clearly explain its data usage practices, collecting excessive data, and possibly mishandling sensitive information. Scialdone has called on the DPC to order X to stop using personal data for AI training and to ensure compliance with GDPR. Violations can draw fines of up to €20 million or 4% of a company’s worldwide annual turnover, whichever is higher, making non-compliance potentially very expensive for X.
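For a sense of scale, the sketch below works through that ceiling as set out in GDPR Article 83(5): the higher of €20 million or 4% of worldwide annual turnover. The turnover figure used is purely hypothetical, not X’s actual revenue.

```python
# Back-of-the-envelope sketch of the GDPR Article 83(5) fine ceiling:
# the greater of EUR 20 million or 4% of worldwide annual turnover.
# The example turnover below is purely hypothetical, not X's figures.
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    """Statutory ceiling for a top-tier GDPR fine, in euros."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

print(f"EUR {max_gdpr_fine(2_500_000_000):,.0f}")  # EUR 100,000,000
```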

The complaint also highlights issues with X’s communication regarding its data processing practices. According to Scialdone, X’s privacy policy does not transparently outline the legal basis for using personal data for AI training. The policy mentions using data on a ‘legitimate interest’ basis, which allows data processing if it serves a valid purpose without infringing on users’ rights. However, Scialdone argued that this information is not easily accessible to users. He also stressed that such legal actions would lead to a consistent regulatory approach across different platforms, preventing disparities in user treatment and market inequalities.

Why does this matter?

Musk’s approach to compliance with EU privacy laws has been controversial, raising concerns about X’s adherence to regulatory standards. The DPC’s involvement signals a potential end to X’s relatively unchecked run on GDPR oversight, as X is now the third major tech company to face such allegations, following similar complaints against Meta and LinkedIn. X has also recently faced regulatory challenges in the Netherlands and scrutiny under the EU’s Digital Services Act, which could lead to even steeper penalties for non-compliance.

AI writing tools in Apple’s iOS 18.1 come with content restrictions

Apple has introduced its new AI-powered Writing Tools in the iOS 18.1 developer beta, providing users with the ability to reformat or rewrite text using Apple’s AI models. However, the tool warns that AI-generated suggestions might not be of the highest quality when dealing with certain sensitive topics. Users will see a message alerting them when attempting to rewrite text containing swear words, references to drugs, or mentions of violence, indicating the tool wasn’t designed for such content.

Despite the warnings, the AI tool still offers suggestions even when it encounters restricted words or phrases. In testing, swapping a swear word for a milder term produced the same AI-generated suggestion. Apple has been asked to clarify which specific topics the writing tools are not trained to handle, but has not yet provided further details.

Apple appears to be exercising caution to avoid controversy by limiting the AI’s handling of certain terms and topics. The Writing Tools feature is not intended to generate new content from scratch but rather to assist in rewriting existing text. Apple’s cautious approach aligns with its history, as seen when it finally allowed autocorrect to learn swear words in iOS 17 after years of restrictions.

The release of these AI features also coincides with Apple’s partnership with OpenAI on future AI innovations and its support for the Biden administration’s AI safety initiatives. These steps underscore Apple’s commitment to responsible AI development while providing advanced tools to its users.

Zuckerberg apologises for Facebook photo error involving Trump

Former President Donald Trump revealed that Meta CEO Mark Zuckerberg apologised to him after Facebook mistakenly labelled a photo of Trump as misinformation. The photo, which showed Trump raising a fist after surviving an assassination attempt at a rally in Butler, Pennsylvania, was initially flagged by Meta’s AI system. Trump disclosed the apology during an interview with FOX Business’ Maria Bartiromo, stating that Zuckerberg called him twice to express regret and praise his response to the event.

Meta Vice President of Global Policy Joel Kaplan clarified that the error occurred due to similarities between a doctored image and the real photo, leading to an incorrect fact-check label. Kaplan explained that the AI system misapplied the label due to subtle differences between the two images. Meta’s spokesperson Andy Stone reiterated that Zuckerberg has not endorsed any candidate for the 2024 presidential election and that the labelling error was not due to bias.

The incident highlights ongoing challenges for Meta as it navigates content moderation and political neutrality, especially ahead of the 2024 United States election. Additionally, the assassination attempt on Trump has sparked various online conspiracy theories. Meta’s AI chatbot faced criticism for initially refusing to answer questions about the shooting, a decision attributed to the overwhelming influx of information during breaking news events. Google’s AI chatbot Gemini similarly refused to address the incident, sticking to its policy of avoiding responses on political figures and elections.

Both Meta and Google have faced scrutiny over their handling of politically sensitive content. Meta’s recent efforts to shift away from politics and focus on other areas, combined with Google’s cautious approach to AI responses, reflect the tech giants’ strategies to manage the complex dynamics of information dissemination and political neutrality in an increasingly charged environment.

Intel falls behind in AI race, Nvidia and AMD surge

Once a leader in the computer chip industry, Intel has faced significant challenges adapting to the AI era. Seven years ago, Intel had an opportunity to invest in OpenAI, a then-emerging non-profit focused on generative AI. Discussions between the two companies explored various investment options, including a $1 billion stake and hardware manufacturing deals, but Intel ultimately decided against it.

Then-CEO Bob Swan doubted the near-term market viability of generative AI models, leading to the decision not to invest. OpenAI sought the investment to reduce its reliance on Nvidia chips and build its own infrastructure, but Intel’s data centre unit was unwilling to produce hardware at cost. Since then, OpenAI has launched ChatGPT and reached a valuation of around $80 billion, marking a significant missed opportunity for Intel.

The decision was part of a series of strategic missteps that saw Intel fall behind in the AI chip market. The company’s stock recently plummeted, marking its worst trading day since 1974 and valuing it at under $100 billion for the first time in three decades. In contrast, rivals like Nvidia and AMD have surged ahead, capturing significant market share with AI-optimised GPU technology.

Despite recent efforts to catch up, such as developing the Gaudi AI chip and acquiring startups like Nervana Systems and Habana Labs, Intel still lags behind competitors. The company’s previous focus on CPUs over GPUs, which are better suited for AI tasks, has left it struggling to compete in the rapidly growing AI market.