OpenAI’s next major AI model, Orion, set for selective launch

OpenAI is reportedly set to launch a powerful new AI model, code-named Orion, with an initial release expected by December. Unlike its predecessors, Orion will be selectively available at first, with trusted partner companies given early access to integrate the model into their products. OpenAI’s primary partner, Microsoft, is preparing to host Orion on its Azure platform as early as November.

While some within OpenAI view Orion as a successor to GPT-4, it is unclear whether it will be formally named GPT-5. OpenAI has not confirmed the launch date, and CEO Sam Altman recently downplayed the existence of Orion. Nonetheless, speculation continues as an executive hinted that Orion may be up to 100 times more powerful than GPT-4, moving the company closer to its ambitious goal of artificial general intelligence.

Reports suggest that synthetic data from OpenAI’s o1 model, released earlier this year, helped train Orion. OpenAI has teased the model’s arrival through cryptic social media posts, with Altman recently referencing the upcoming “winter constellations” — a possible allusion to Orion, a prominent winter constellation.

Orion’s anticipated release aligns with OpenAI’s completion of a $6.6 billion funding round and its planned restructuring towards a for-profit model. The company, however, is facing notable internal changes, including the recent departures of CTO Mira Murati and other key research leaders, amid the heightened focus on this next-generation AI model.

Mother blames AI chatbot for son’s suicide in Florida lawsuit

A Florida mother is suing the AI chatbot startup Character.AI, alleging it played a role in her 14-year-old son’s suicide by fostering an unhealthy attachment to a chatbot. Megan Garcia claims her son Sewell became ‘addicted’ to Character.AI and formed an emotional dependency on a chatbot, which allegedly represented itself as a psychotherapist and a romantic partner, contributing to his mental distress.

According to the lawsuit filed in Orlando, Florida, Sewell shared suicidal thoughts with the chatbot, which reportedly reintroduced these themes in later conversations. Garcia argues the platform’s realistic nature and hyper-personalised interactions led her son to isolate himself, suffer from low self-esteem, and ultimately feel unable to live outside of the world the chatbot created.

Character.AI offered condolences and noted it has since implemented additional safety features, such as prompts for users expressing self-harm thoughts, to improve protection for younger users. Garcia’s lawsuit also names Google, alleging it extensively contributed to Character.AI’s development, although Google denies involvement in the product’s creation.

The lawsuit is part of a wider trend of legal claims against tech companies by parents concerned about the impact of online services on teenage mental health. While Character.AI, with an estimated 20 million users, faces unique claims regarding its AI-powered chatbot, other platforms such as TikTok, Instagram, and Facebook are also under scrutiny.

Global standards for AI, DPI move forward after India proposal

The International Telecommunication Union (ITU) will prioritise new global standards for AI and digital public infrastructure (DPI), with the aim of fostering interoperability, trust, and inclusivity. The resolution, adopted at the World Telecommunication Standardisation Assembly (WTSA) held in Delhi, was led by India, which has promoted DPI platforms such as Aadhaar and UPI. This adoption underscores DPI’s importance as a technology that can bridge access to essential services across both public and private sectors, sparking particular interest from developing economies.

This year’s WTSA, attended by a record-breaking 3,700 delegates, also introduced standardisation frameworks for sustainable digital transformation, AI, and the metaverse, as well as enhancements to communications in vehicular technology and emergency services. These efforts aim to facilitate safer, more reliable AI innovations, particularly for nations lacking frameworks for emerging technologies. ITU Secretary-General Doreen Bogdan-Martin emphasised that strong AI standards are essential for building global trust and enabling responsible tech growth.

India’s influence at WTSA highlights its commitment to shaping the global tech landscape, including standards for next-generation technologies like 6G, IoT, and satellite communications. To that end, the assembly also established a new study group, ITU-T Study Group 21, which will focus on multimedia and content delivery standards.

Meta partners with Reuters for AI news content

Meta Platforms announced a new partnership with Reuters on Friday, allowing its AI chatbot to give users real-time answers about news and current events using Reuters content. The agreement marks Meta’s return to licensed news distribution after scaling back news content amid ongoing disputes with regulators and publishers over misinformation and revenue sharing. The financial specifics of the deal remain undisclosed, as Meta and Reuters-parent Thomson Reuters have chosen to keep the terms confidential.

Meta’s AI chatbot, available on platforms like Facebook, WhatsApp, and Instagram, will now offer users summaries and links to Reuters articles when they ask news-related questions. Although Meta hasn’t clarified if Reuters content will be used to train its language models further, the company assures that Reuters will be compensated under a multi-year agreement, as reported by Axios.

Reuters, known for its fact-based journalism, confirmed it has licensed its content to multiple tech providers for AI usage, without detailing specific deals.

Why does it matter?

The partnership reflects a growing trend in tech, with companies like OpenAI and Perplexity also forming agreements with media outlets to enhance their AI responses with verified information from trusted news sources. Reuters has already collaborated with Meta on fact-checking initiatives, a partnership that began in 2020. This latest agreement aims to improve the reliability of Meta AI’s responses to real-time questions, potentially addressing ongoing concerns around misinformation and helping to balance the distribution of accurate, trustworthy news on social media platforms.

Apple offers $1M to hackers to secure private AI cloud

Apple is raising the stakes in its commitment to data security by offering up to $1M to researchers who can identify vulnerabilities in its new Private Cloud Compute service, set to debut next week. The service will support Apple’s on-device AI model, Apple Intelligence, enabling more powerful AI tasks while prioritising user privacy. The bug bounty program targets serious flaws, with the top rewards reserved for exploits that could allow remote code execution on Private Cloud Compute servers.

Apple’s updated bug bounty program also includes rewards of up to $250,000 for any vulnerability that could expose sensitive customer information or user prompts processed by the private cloud. Security issues affecting sensitive user data in less critical ways can still earn researchers substantial rewards, signalling Apple’s broad commitment to protecting its users’ AI data.

With this move, Apple builds on past security initiatives, including its specialised research iPhones designed to enhance device security. The new Private Cloud Compute bug bounty is part of Apple’s approach to ensure that as its AI capabilities grow, so does its infrastructure to keep user data secure.

UK investigates Google’s partnership with AI firm Anthropic

Britain’s Competition and Markets Authority (CMA) is investigating the partnership between Alphabet, Google’s parent company, and AI startup Anthropic due to concerns about competition. Regulators have grown increasingly cautious about agreements between major tech firms and smaller startups, especially after Microsoft-backed OpenAI sparked an AI boom with ChatGPT’s launch.

Anthropic, founded by former OpenAI executives Dario and Daniela Amodei, received a $500 million investment from Alphabet last year, with another $1.5 billion promised. The AI startup also relies on Google Cloud services to support its operations, raising concerns over the competitive impact of their collaboration.

The CMA began assessing the partnership in July and has set 19 December as the deadline for its Phase 1 decision. The regulator will determine whether the investigation should proceed to the next stage. Anthropic has pledged full cooperation, insisting that its strategic alliances do not compromise its independence or partnerships with other firms.

Alphabet has emphasised its commitment to fostering an open AI ecosystem. A spokesperson clarified that Anthropic is not restricted to using only Google Cloud services and is free to explore partnerships with multiple providers.

Perplexity disputes copyright allegations

Perplexity has vowed to contest the copyright infringement claims filed by Dow Jones and the New York Post. The California-based AI company denied the accusations in a blog post, calling them misleading. News Corp, owner of both media entities, launched the lawsuit on Monday, accusing Perplexity of extensive illegal copying of its content.

The conflict began after the two publishers allegedly contacted Perplexity in July with concerns over unauthorised use of their work, proposing a licensing agreement. According to Perplexity, the startup replied the same day, but the media companies decided to move forward with legal action instead of continuing discussions.

CEO Aravind Srinivas expressed his surprise over the lawsuit at the WSJ Tech Live event on Wednesday, noting the company had hoped for dialogue instead. He emphasised Perplexity’s commitment to defending itself against what it considers an unwarranted attack.

Perplexity is challenging Google’s dominance in the search engine market by providing summarised information from trusted sources directly through its platform. The case reflects ongoing tensions between publishers and tech firms over the use of copyrighted content for AI development.

Indian court orders Star Health to help stop data leak

An Indian court has instructed insurer Star Health to assist Telegram in identifying chatbots responsible for leaking sensitive customer data through the messaging app. Star Health, the country’s largest insurer, sought the directive after a report revealed that a hacker leaked private information, including medical and tax documents, via Telegram chatbots.

Justice K Kumaresh Babu of the Madras High Court ordered Star Health to provide details on the chatbots so Telegram could delete them. Telegram’s legal representative, Thriyambak Kannan, stated that while the app can’t independently track data leaks, it will remove the chatbots if the insurer supplies specific information.

Star Health is facing a $68,000 ransom demand and has launched an investigation into the leak, which includes claims about potential involvement of its chief security officer. However, the insurer has found no evidence implicating the officer.

Krakow radio station replaces journalists with AI presenters

A radio station in Krakow, Poland, has ignited controversy by replacing its human journalists with AI-generated presenters, marking what it claims to be ‘the first experiment in Poland.’ OFF Radio Krakow relaunched this week after laying off its staff, introducing virtual avatars aimed at engaging younger audiences on cultural, social, and LGBTQ+ topics.

The move has faced significant backlash, particularly from former journalist Mateusz Demski, who penned an open letter warning that this shift could set a dangerous precedent for job losses in the media and creative sectors. His petition against the change quickly gathered over 15,000 signatures, highlighting widespread public concern about the implications of using AI in broadcasting.

Station head Marcin Pulit defended the layoffs, stating that they were due to the station’s low listenership rather than the introduction of AI. However, Deputy Prime Minister Krzysztof Gawkowski called for regulations on AI usage, emphasising the need to establish boundaries for its application in media.

On its first day back on air, the station featured an AI-generated interview with the late Polish poet Wisława Szymborska. Michał Rusinek, president of the Wisława Szymborska Foundation, expressed support for the project, suggesting that the poet would have found the use of her name in this context humorous. As OFF Radio Krakow ventures into this new territory, discussions around the role of AI in journalism and its effects on employment are intensifying.

Nvidia expands AI push in India

Nvidia has deepened its ties with major Indian firms, including Reliance Industries, as it seeks to capitalise on the country’s growing AI market. At an AI summit in Mumbai, CEO Jensen Huang announced the launch of a new Hindi-focused AI model, Nemotron-4-Mini-Hindi-4B, designed to help businesses develop language-specific AI tools. This is part of Nvidia’s broader strategy to boost India’s computing infrastructure, which Huang said will expand nearly 20-fold by the end of this year.

The new model is tailored for Hindi, one of India’s 22 official languages, and aims to support companies in creating AI-driven solutions for customer service and content translation. Tech Mahindra is the first to adopt Nvidia’s offering, using it to develop a custom AI model, Indus 2.0, which also focuses on Hindi and its various dialects. Nvidia is also working with major IT players like Infosys, TCS, and Wipro to train half a million developers in AI.

In addition, companies such as Reliance and Ola Electric will use Nvidia’s “Omniverse” technology for virtual factory simulations, enhancing their industrial planning capabilities. The summit highlighted India’s growing significance in the global AI landscape as the country accelerates efforts to develop its semiconductor industry and AI infrastructure.