Miles Brundage exits OpenAI to focus on independent research

Miles Brundage, a veteran policy researcher and senior adviser at OpenAI, has left the company to pursue independent work in the nonprofit sector. In a post on X and an accompanying essay, Brundage explained his decision, saying he believes he can have a greater impact on AI policy and research outside the industry, where he will have more freedom to publish his findings.

Brundage joined OpenAI in 2018 and played a key role in the company’s policy research, particularly in the responsible deployment of AI systems like ChatGPT. His departure signals ongoing shifts within OpenAI, with the company reorganising its economic research and AGI readiness teams. While OpenAI expressed support for Brundage’s decision, it did not specify who will take over his responsibilities.

Brundage’s exit is part of a broader trend of high-profile departures from OpenAI, with several key figures, including CTO Mira Murati and chief research officer Bob McGrew, having recently resigned. The departures reflect internal disagreements about the company’s direction, especially as it faces criticism over balancing commercial ambitions with AI safety.

AI cheating scandal at university sparks concern

Hannah, a university student, admits to using AI to complete an essay when overwhelmed by deadlines and personal illness. Struggling with COVID and intense academic pressure, she turned to AI for help but later faced an academic misconduct hearing. Though cleared due to insufficient evidence, Hannah warns others about the risks of relying on AI tools for dishonest purposes.

Universities now grapple with teaching students to use AI responsibly while preventing misuse. A lecturer used detection software to discover that Hannah’s essay had been generated by AI, reflecting the complexities of monitoring academic integrity. Some institutions prohibit AI unless explicitly approved, while others allow limited use for grammar checks or structural guidance if properly cited.

Lecturers note that AI-generated content often lacks coherence and critical thinking. Dr Sarah Lieberman from Canterbury Christ Church University explains how AI-produced essays can be spotted easily, describing them as lacking the human touch. Nonetheless, she acknowledges AI’s potential benefits, such as generating ideas or guiding students in their research, if used appropriately.

Students hold mixed views on AI in education. Some embrace it as a helpful tool for structuring work or exam preparation, while others resist it, preferring to rely on their own efforts. A Department for Education spokesperson emphasises the need for universities to find a balance between maintaining academic integrity and preparing students for the workplace by equipping them with essential AI skills.

AI tool decodes pig emotions for farmers

European scientists have developed an AI algorithm that can interpret pig sounds to help farmers monitor their animals’ emotions, potentially improving pig welfare. The tool, created by researchers from universities across several European countries, analyses grunts, oinks, and squeals to identify whether pigs are experiencing positive or negative emotions. This could give farmers new insights beyond just monitoring physical health, as emotions are key to animal welfare but are often overlooked on farms.

The study found that pigs on free-range or organic farms produce fewer stress-related calls compared to conventionally raised pigs, suggesting a link between environment and emotional well-being. The AI algorithm could eventually be used in an app to alert farmers when pigs are stressed or uncomfortable, allowing for better management. Short grunts are associated with positive feelings, while longer grunts and high-pitched squeals often indicate stress or discomfort.
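The duration-and-pitch mapping described above could be sketched, very loosely, as a rule-based classifier. The thresholds below are hypothetical placeholders for illustration only; the researchers’ actual algorithm learns its boundaries from labelled recordings.

```python
from dataclasses import dataclass

@dataclass
class PigCall:
    duration_s: float   # length of the vocalisation in seconds
    pitch_hz: float     # dominant frequency of the call

# Illustrative cut-offs only -- not values from the study.
SHORT_CALL_S = 0.4
HIGH_PITCH_HZ = 1000.0

def classify_emotion(call: PigCall) -> str:
    """Rule of thumb from the article: short grunts suggest positive
    emotion; long grunts and high-pitched squeals suggest stress."""
    if call.pitch_hz >= HIGH_PITCH_HZ:
        return "negative"   # high-pitched squeal
    if call.duration_s <= SHORT_CALL_S:
        return "positive"   # short grunt
    return "negative"       # long grunt
```

An app built on the real model would replace these fixed thresholds with a classifier trained on thousands of annotated recordings, but the input features (call length, pitch) and the binary positive/negative output would look much the same.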

Researchers believe that once fully developed, this technology could not only benefit animal welfare but also help consumers make more informed choices about the farms they support.

ChatGPT comes to Apple’s new intelligence features

Apple has introduced ChatGPT integration with the release of iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2, allowing developers to explore new features tied to its Apple Intelligence system. The integration enables ChatGPT to enhance Siri’s knowledge and power new writing tools, along with other features like image generation and cleanup tools.

Users who opt into both Apple Intelligence and ChatGPT will be able to leverage OpenAI’s models without needing a separate ChatGPT account, though non-premium users will face limitations on the number of queries. Siri can now call on ChatGPT for certain tasks, such as generating recipes or helping with travel plans, making the virtual assistant more versatile.

Apple Intelligence also includes ‘Compose,’ which lets users generate text based on prompts in supported apps. Alongside this, users can experiment with OpenAI’s image generation or create customised emojis through Apple’s Genmoji tool, offering a more creative and intuitive user experience.

Nvidia resolves flaw with Blackwell AI chips

Nvidia’s CEO Jensen Huang announced that a design flaw impacting the company’s Blackwell AI chips has been resolved with assistance from TSMC, its long-time Taiwanese manufacturing partner. The production glitch had delayed chip shipments, initially set for the second quarter, affecting clients such as Google, Microsoft, and Meta.

Huang acknowledged Nvidia was solely responsible for the flaw, which had reduced production yields. He dismissed reports of tensions with TSMC, crediting the manufacturer for helping restore manufacturing efficiency. The chips, which involve the integration of seven different components, are now expected to ship in the fourth quarter.

Blackwell chips, Nvidia’s latest innovation, feature two silicon squares fused into a single unit, delivering speeds 30 times faster than previous models. They are designed for advanced tasks, including AI-driven responses from chatbots. Shares in Nvidia fell by 2% in early trading following news of the delay.

Huang made the announcement during a visit to Denmark, where he introduced Gefion, a new supercomputer featuring 1,528 GPUs. Built in partnership with the Novo Nordisk Foundation and Denmark’s Export and Investment Fund, Gefion is expected to enhance high-performance computing in the region.

OpenAI’s rumoured Orion model sparks excitement and speculation

OpenAI’s latest AI model, code-named Orion, is reportedly set to debut by December, with limited access initially granted to a few corporate partners, according to sources. Unlike previous releases available broadly on ChatGPT, Orion will first be shared with select companies, including key partner Microsoft. Engineers at Microsoft are preparing to deploy Orion on Azure by November, suggesting early access could be imminent.

Although Orion is seen as the successor to GPT-4, OpenAI has yet to confirm if the model will officially carry the GPT-5 designation. Publicly, OpenAI has downplayed the reports, with CEO Sam Altman dismissing them as “fake news.” An OpenAI spokesperson later clarified that the company has “no plans to release a model code-named Orion this year,” while confirming a commitment to releasing new technology.

Sources indicate that Orion could be up to 100 times more powerful than GPT-4 and separate from OpenAI’s o1 reasoning model, launched in September. Orion’s development has likely involved synthetic data generated by o1, referred to internally as “Strawberry.” OpenAI celebrated completing Orion’s training last month, which coincided with a cryptic post by Altman hinting at the model’s arrival, mentioning his excitement for “winter constellations.”

Orion is expected to advance OpenAI’s goal of creating a model capable of artificial general intelligence (AGI), a significant leap from current large language models. The prospect of Orion has drawn speculation, both for its potential capabilities and its selective release strategy, signalling OpenAI’s commitment to carefully refining its technology for high-level applications.

Presight and Colombia’s Ministry of Science partner to advance AI and smart city technology

Presight and Colombia’s Ministry of Science, Technology, and Innovation have forged a significant partnership by signing a Memorandum of Understanding (MoU) in Abu Dhabi. The collaboration primarily focuses on advancing research and development in AI, data analytics, and innovation, particularly within emerging smart cities, energy transition, and climate action technologies.

To foster interaction among institutions in both regions, the partnership plans to organise seminars and conferences and establish mechanisms for technology transfer, thereby accelerating the adoption of AI and big data in Colombia. This strategic alliance aligns with Colombia’s ambitions to enhance operational efficiency in smart cities while advancing its bioeconomy goals.

Furthermore, it represents a key step in Presight’s international expansion, reflecting Colombia’s desire to become a significant player in Latin America’s tech landscape. Leaders from both organisations have expressed their enthusiasm for this partnership.

It has been described as a milestone for advancing research and innovation in Colombia and the broader Latin American region. Additionally, the importance of the MoU in strengthening ties with the UAE has been emphasised, along with a commitment to ethical and sustainable AI initiatives. Together, Presight and Colombia aim to harness the potential of AI and big data to address pressing global challenges, thereby positioning themselves as leaders in innovation and technology in their respective regions.

Tech competition between US and China to escalate

The outcome of the US presidential election will not change the course of the tech conflict with China. Both Republican Donald Trump and Vice President Kamala Harris are expected to intensify measures aimed at limiting China’s access to American technology and resources, although their strategies will differ.

Harris is likely to adopt a focused, multilateral approach, building on Biden’s tactics by working with allies to curb the flow of advanced technology to China. In contrast, Trump’s strategy could include sweeping measures, such as expanding tariffs and aggressively enforcing export controls, possibly escalating tensions with allies who resist the US lead.

Both candidates aim to curb China’s technological advancement and its military capabilities. Harris has pledged to ensure the US remains at the forefront of the global technology race, while Trump continues to advocate for higher tariffs and tough restrictions, including denying China access to essential components like AI chips.

China has already responded to recent US actions by imposing restrictions on exports of critical materials, such as graphite and rare earths. Experts warn that the US should exercise caution, as some industries remain reliant on Chinese resources. The tech war will likely see new fronts, including connected devices, as the conflict deepens under the next administration.

Google unveils open-source watermark for AI text

Google has released SynthID Text, a watermarking tool designed to help developers identify AI-generated content. Available for free on platforms like Hugging Face and Google’s Responsible GenAI Toolkit, this open-source technology aims to improve transparency around AI-written text. It works by embedding subtle patterns into the token distribution of text generated by AI models without affecting the quality or speed of the output.

SynthID Text has been integrated with Google’s Gemini models since earlier this year. While it can detect text that has been paraphrased or modified, the tool does have limitations, particularly with shorter text, factual responses, and content translated from other languages. Google acknowledges that its watermarking technique may struggle with these formats but emphasises the tool’s overall benefits.
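The article doesn’t describe SynthID’s exact algorithm, but the general idea of token-distribution watermarking can be sketched with a keyed “green list” bias, as in published watermarking research. This is an illustrative toy, not Google’s actual scheme: a hash of the previous token deterministically marks part of the vocabulary as “green,” generation favours green tokens, and a detector measures how often that bias shows up.

```python
import hashlib
import random

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Seed a PRNG with a hash of the previous token and mark a
    'green' subset of the vocabulary (toy keyed scheme)."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))

def watermark_score(tokens: list[str], vocab: list[str]) -> float:
    """Fraction of tokens that fall in their predecessor's green list.
    Unwatermarked text should score near the green fraction (~0.5);
    text generated with the green-list bias scores much higher."""
    hits = sum(
        1 for prev, cur in zip(tokens, tokens[1:])
        if cur in green_list(prev, vocab)
    )
    return hits / max(len(tokens) - 1, 1)
```

Because the bias is spread statistically across many tokens rather than attached to any single word, the text reads normally, which matches the article’s point that quality and speed are unaffected. It also hints at the stated limitation: short texts simply don’t contain enough tokens for the score to separate cleanly from chance.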

As the demand for AI-generated content grows, so does the need for reliable detection methods. Countries like China already mandate watermarking of AI-produced material, and similar regulations are being considered in California. The urgency is clear, with predictions that AI-generated content could account for 90% of online text by 2026, creating new challenges in combating misinformation and fraud.

TSMC notifies US of Huawei AI chip concerns

Taiwan Semiconductor Manufacturing Company (TSMC) has notified the US government about a potential breach of export controls involving Huawei. TSMC suspects that the Chinese tech company may be attempting to work around US restrictions that ban the chipmaker from producing advanced AI chips for Huawei, a target of American trade curbs since 2020.

The US imposed these controls to limit China’s access to high-end semiconductors, crucial for developing military technologies. While TSMC claims it hasn’t supplied Huawei since mid-2020, a recent customer order for a chip similar to Huawei’s Ascend 910B has raised concerns. The AI chip in question is designed for training large language models, a key area of competition in the tech rivalry between Washington and Beijing.

TSMC promptly reported the situation to the US Commerce Department, although no investigation has been launched against the company. The US and Huawei have yet to comment on the matter.