Altman shares first glimpse of GPT-5 via Pantheon screenshot

OpenAI CEO Sam Altman shared a screenshot on X showing GPT-5 in action. The post casually endorsed the animated sci-fi series Pantheon, a cult tech favourite exploring general AI.

When asked if GPT-5 also recommends the show, Altman replied with a screenshot: ‘turns out yes’. It marked one of the earliest public glimpses of the new model, hinting at expanded capabilities.

GPT-5 is expected to outperform its predecessors, with a larger context window, multimodal abilities, and more agentic task handling. The screenshot also shows that some quirks remain, such as its fondness for the em dash.

The model identified Pantheon as having a 100% critic rating on Rotten Tomatoes and described it as ‘cerebral, emotional, and philosophically intense’. Business Insider verified the score and tone of the reviews.

OpenAI faces mounting pressure to keep pace with rivals like Google DeepMind, Meta, xAI, and Anthropic. Public teasers such as this one suggest GPT-5 will soon make a broader debut.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI tools like Grok 4 may make developers obsolete, Musk suggests

Elon Musk has predicted a major shift in software development, claiming that AI is turning coding from a job into a recreational activity. The xAI CEO believes AI has removed much of the ‘drudgery’ from writing software.

Replying to OpenAI President Greg Brockman, Musk compared the future of coding to painting. He suggested that software creation will become more creative and expressive, no longer requiring professional expertise to achieve functional results.

Musk, a co-founder of OpenAI, left the organisation after a public dispute with the current CEO, Sam Altman. He later launched xAI, which now operates the Grok chatbot as a rival to ChatGPT, Gemini and Claude.

Generative AI firms are accelerating efforts in automated coding. OpenAI recently launched Codex, a cloud-based software engineering agent, while Microsoft released GitHub Spark AI to generate apps from natural language.

xAI’s latest offering, Grok 4, supports over 20 programming languages and integrates with code editors. It enables developers to write, debug, and understand code using commands.
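
As a rough, unofficial sketch of that workflow, the snippet below asks a Grok model to review a buggy function through xAI's OpenAI-compatible chat API. The endpoint URL, the model name 'grok-4', and the XAI_API_KEY variable are assumptions for illustration, not details confirmed by this article.

```python
# Minimal sketch: asking a Grok model to debug a snippet via xAI's
# OpenAI-compatible chat API. The base URL, model name and env-var name
# are assumptions; check xAI's documentation for the current values.
import os

from openai import OpenAI  # pip install openai

client = OpenAI(
    api_key=os.environ["XAI_API_KEY"],   # assumed environment variable
    base_url="https://api.x.ai/v1",      # assumed OpenAI-compatible endpoint
)

buggy_code = '''
def mean(values):
    return sum(values) / len(values)  # fails on an empty list
'''

response = client.chat.completions.create(
    model="grok-4",  # assumed model identifier
    messages=[
        {"role": "system", "content": "You are a concise code reviewer."},
        {"role": "user", "content": f"Find and fix the bug:\n{buggy_code}"},
    ],
)
print(response.choices[0].message.content)
```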

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI is the next iPhone moment, says Apple CEO Tim Cook

Any remaining doubts about Apple’s commitment to AI have been addressed directly by its CEO, Tim Cook.

At an all-hands meeting on Apple’s Cupertino campus, Cook told employees that the AI revolution is as big as the internet, smartphones, cloud computing, and apps.

According to Bloomberg’s Power On newsletter, Cook clarified that Apple sees AI as an imperative. ‘Apple must do this,’ he said, describing the opportunity as ‘ours to grab’.

Although Apple unveiled its AI suite, Apple Intelligence, only in June, well after its competitors, Cook remains optimistic about the company's ability to take the lead.

‘We’ve rarely been first,’ he told staff. ‘There was a PC before the Mac; a smartphone before the iPhone; many tablets before the iPad; an MP3 player before the iPod.’

Cook stressed that Apple had redefined these categories and suggested a similar future for AI, declaring, ‘This is how I feel about AI.’

Cook also outlined concrete steps the company is taking. Around 40% of the 12,000 hires made last year were allocated to research and development, with much of the focus on AI.

Apple is also reportedly developing a new cloud-computing chip, code-named Baltra, designed to support AI features, according to Bloomberg. In a recent interview with CNBC, Cook said Apple is open to acquisitions that could accelerate its progress in AI.

Apple is not alone in its intense focus on AI. Rival firms are also raising expectations and pressure. Sergey Brin, the Google co-founder who has returned to the company, told employees that 60-hour in-office work weeks may be necessary to win the AI race.

Reports of burnout and extreme workloads are becoming more frequent across leading AI firms. Former OpenAI engineer Calvin French-Owen recently described the company’s high-pressure and secretive culture.

French-Owen noted that the environment had become so intense that leadership offered the entire staff a week off to recover, according to Wired.

AI has become the next major battleground in big tech, with companies ramping up investment and reshaping internal structures to secure dominance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI pulls searchable chats from ChatGPT

OpenAI has removed a feature that allowed users to make their ChatGPT conversations publicly searchable, following backlash over accidental exposure of sensitive content.

Dane Stuckey, OpenAI’s CISO, confirmed the rollback on Thursday, describing it as a short-lived experiment meant to help users find helpful conversations. However, he acknowledged that the feature posed privacy risks.

‘Ultimately, we think this feature introduced too many opportunities for folks to accidentally share things they didn’t intend to,’ Stuckey wrote in a post on X. He added that OpenAI is working to remove any indexed content from search engines.

The move came swiftly after Fast Company and privacy advocate Luiza Jarovsky reported that some shared conversations were appearing in Google search results.

Jarovsky posted examples on X, noting that even though the chats were anonymised, users were unknowingly revealing personal experiences, including harassment and mental health struggles.

To activate the feature, users had to tick a box allowing their chat to be discoverable. While the process required active steps, critics warned that some users might opt in without fully understanding the consequences. Stuckey said the rollback will be complete by Friday morning.

The incident adds to growing concerns around AI and user privacy, particularly as conversational platforms like ChatGPT become more embedded in everyday life.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

As Meta AI grows smarter on its own, critics warn of regulatory gaps

While OpenAI’s ChatGPT and Google’s Gemini dominate headlines, Meta’s AI is making quieter, but arguably more unsettling, progress. According to CEO Mark Zuckerberg, Meta’s AI is advancing rapidly and, crucially, learning to improve without external input.

In a blog post titled ‘Personal Superintelligence’, Zuckerberg claimed that Meta AI is becoming increasingly powerful through self-directed development. While he described current gains as modest, he emphasised that the trend is both real and significant.

Zuckerberg framed this as part of a broader mission to build AI that acts as a ‘personal superintelligence’, a tool that empowers individuals and becomes widely accessible. However, critics argue this narrative masks a deeper concern: AI systems that can evolve autonomously, outside human guidance or scrutiny.

The concept of self-improving AI is not new. Researchers have previously built systems capable of learning from other models or user interactions. What’s different now is the speed, scale and opacity of these developments, particularly within big tech companies operating with minimal public oversight.

The progress comes amid weak regulation. While governments, including the US under the Biden administration, have issued AI action plans, experts say they lack the strength to keep up. Meanwhile, AI is rapidly spreading across everyday services, from healthcare and education to biometric verification.

Recent examples include Google’s behavioural age-estimation tools for teens, illustrating how AI is already making high-stakes decisions. As AI systems become more capable, questions arise: How much data will they access? Who controls them? And can the public meaningfully influence their design?

Zuckerberg struck an optimistic tone, framing Meta’s AI as democratic and empowering. However, that may obscure the risks of AI outpacing oversight, as some tech leaders warn of existential threats while others focus on commercial gains.

The lack of transparency worsens the problem. If Meta’s AI is already showing signs of self-improvement, are similar developments happening in other frontier models, such as GPT or Gemini? Without independent oversight, the public has no clear way to know—and even less ability to intervene.

Until enforceable global regulations are in place, society is left to trust that private firms will self-regulate, even as they compete in a high-stakes race for dominance. That’s a risky gamble when the technology itself is changing faster than we can respond.

As Meta AI evolves with little fanfare, the silence may be more ominous than reassuring. AI’s future may arrive before we are prepared to manage its consequences, and by then, it might be too late to shape it on our terms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI and Nscale to build an AI super hub in Norway

OpenAI has revealed its first European data centre project in partnership with British startup Nscale, selecting Norway as the location for what is being called ‘Stargate Norway’.

The initiative mirrors the company’s ambitious $500 billion US ‘Stargate’ infrastructure plan and reflects Europe’s growing demand for large-scale AI computing capacity.

Nscale will lead the development of a $1 billion AI gigafactory in Norway, with engineering firm Aker matching the investment. These advanced data centres are designed to meet the heavy processing requirements of cutting-edge AI models.

OpenAI expects the facility to deliver 230MW of computing power by the end of 2026, making it a significant strategic foothold for the company on the continent.

Sam Altman, CEO of OpenAI, stated that Europe needs significantly more computing to unlock AI’s full potential for researchers, startups, and developers. He said Stargate Norway will serve as a cornerstone for driving innovation and economic growth in the region.

Nscale confirmed that Norway’s AI ecosystem will receive priority access to the facility, while remaining capacity will be offered to users across the UK, Nordics and Northern Europe.

The data centre will support 100,000 of NVIDIA’s most advanced GPUs, with long-term plans to scale as demand grows.

The move follows broader European efforts to strengthen AI infrastructure, with the UK and France pushing for major regulatory and funding reforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI annualised revenue doubles to $12 billion

OpenAI has doubled its revenue in the first seven months of 2025, reaching an annualised run rate of about $12 billion.

Surging demand for both consumer ChatGPT products and enterprise-level AI services is the main driver for this rapid growth.

Weekly active users of ChatGPT have soared to approximately 700 million, reflecting the platform’s expanding global reach and wide penetration. 

At the same time, costs have risen sharply, with cash burn projected around $8 billion in 2025, up from previous estimates.

OpenAI is preparing to release its next-generation AI model, GPT-5, in early August, underscoring its focus on innovation to maintain leadership in the AI market.

Despite growing competition from rival firms like DeepSeek, OpenAI remains confident that its technological edge and expanding product portfolio will sustain momentum.

Financial projections suggest potential revenue of $11 billion this year, with continued expansion into enterprise services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT gets smarter with Study Mode to support active learning

OpenAI has launched a new Study Mode in ChatGPT to help users engage more deeply with learning. Rather than simply providing answers, the feature guides users through concepts and problem-solving step-by-step. It is designed to support critical thinking and improve long-term understanding.

The company developed the feature with educators, scientists, and pedagogy experts. They aimed to ensure the AI supports active learning and doesn’t just deliver quick fixes. The result is a mode that encourages curiosity, reflection, and metacognitive development.

According to OpenAI, Study Mode allows users to approach subjects more critically and thoroughly. It breaks down complex ideas, asks questions, and helps manage cognitive load during study. Instead of spoon-feeding, the AI acts more like a tutor than a search engine.

The shift reflects a broader trend in educational technology — away from passive learning tools. Many students turn to AI for homework help, but educators have warned of over-reliance. Study Mode attempts to strike a balance by promoting engagement over shortcuts.

For instance, rather than giving the complete solution to a maths problem, Study Mode might ask: ‘What formula might apply here?’ or ‘How could you simplify this expression first?’ This approach nudges students to participate in the process and build fundamental problem-solving skills.
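
Study Mode is a built-in ChatGPT feature rather than a separate API, but the Socratic style described above can be loosely approximated with an ordinary chat completion. The system prompt and the model name 'gpt-4o' in the sketch below are illustrative assumptions, not OpenAI's actual Study Mode implementation.

```python
# Rough approximation of a Socratic tutoring style via the standard chat API.
# The prompt wording and model name are assumptions for illustration only.
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tutor_prompt = (
    "Act as a patient maths tutor. Never give the full solution outright. "
    "Ask one guiding question at a time, such as which formula might apply "
    "or how the expression could be simplified first."
)

reply = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": tutor_prompt},
        {"role": "user", "content": "Solve x^2 - 5x + 6 = 0."},
    ],
)
print(reply.choices[0].message.content)
```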

It also adapts to different learning needs. In science, it might walk through hypotheses and reasoning; in the humanities, it may help analyse a passage or structure an essay. By prompting users to think aloud, it mirrors effective tutoring strategies.

OpenAI says feedback from teachers helped shape the feature’s tone and pacing. One key aim was to avoid overwhelming learners with too much information at once. Instead, Study Mode introduces concepts incrementally, supporting better retention and understanding.

The company also consulted cognitive scientists to align the feature with best practices in memory and comprehension. This includes encouraging users to reflect on what they are learning and why specific steps matter. Such strategies are known to improve both academic performance and self-directed learning.

While the feature is part of ChatGPT, it can be toggled on or off. Users can activate Study Mode when tackling a tricky topic or exploring new material. They can then switch to normal responses for broader queries or summarised answers.

Educators have expressed cautious optimism about the update. Some see it as a tool supporting homework, revision, or assessment preparation. However, they also warn that no AI can replace direct teaching or personalised guidance.

Tools like this could be valuable in under-resourced settings or for independent learners.

Study Mode’s interactive style may help level the playing field for students without regular academic support. It also gives parents and tutors a new way to guide learners without doing the work for them.

OpenAI's earlier education efforts included teacher guides and classroom use cases. Study Mode, however, marks a more direct push to reshape how students use AI in learning.

It positions ChatGPT not as a cheat sheet, but as a co-pilot for intellectual growth.

Looking ahead, OpenAI says it plans to iterate based on user feedback and teacher insights. Future updates may include subject-specific prompts, progress tracking, or integrations with educational platforms. The goal is to build a tool that adapts to learning styles without compromising depth or rigour.

As AI continues to reshape education, tools like Study Mode may help answer a central question: Can technology support genuine understanding, instead of just faster answers? With Study Mode, OpenAI believes the answer is yes, if used wisely.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT Agent brings autonomous task handling to OpenAI users

OpenAI has launched the ChatGPT Agent, a feature that transforms ChatGPT from a conversational tool into a proactive digital assistant capable of performing complex, real-world tasks.

By activating ‘agent mode,’ users can instruct ChatGPT to handle activities such as booking restaurant reservations, ordering groceries, managing emails and creating presentations.

The Agent operates within a virtual browser environment, allowing it to interact with websites, fill out forms, and execute multi-step tasks autonomously.

The advancement builds upon OpenAI's previous tool, Operator, which enabled AI-driven task execution. However, the ChatGPT Agent offers enhanced capabilities, including integration with third-party services like Gmail and Google Drive, allowing it to manage emails and documents seamlessly.

Users can monitor the Agent’s actions in real-time and intervene when necessary, particularly during tasks involving sensitive information.

While the ChatGPT Agent offers significant convenience, it also raises questions about data privacy and security. OpenAI has implemented safety measures, such as requiring explicit user consent for sensitive actions and training the Agent to refuse risky or malicious requests.

Despite these precautions, concerns persist over how personal information is handled and how access to third-party services is managed. Users should review the Agent's permissions and settings to ensure their data remains secure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta forms AI powerhouse by appointing Shengjia Zhao as chief scientist

Meta has appointed former OpenAI researcher Shengjia Zhao as Chief Scientist of its newly formed AI division, Meta Superintelligence Labs (MSL).

Zhao, known for his pivotal role in developing ChatGPT, GPT-4, and OpenAI’s first reasoning model, o1, will lead MSL’s research agenda under Alexandr Wang, the former CEO of Scale AI.

Mark Zuckerberg confirmed Zhao's appointment, saying Zhao had co-founded the lab and led its scientific efforts from the start.

Meta has aggressively recruited top AI talent to build out MSL, including senior researchers from OpenAI, DeepMind, Apple, Anthropic, and Meta's own FAIR lab. Zhao's presence helps balance the leadership team, as Wang lacks a formal research background.

Meta has reportedly offered massive compensation packages to lure experts, with Zuckerberg even contacting candidates personally and hosting them at his Lake Tahoe estate. MSL will focus on frontier AI, especially reasoning models, in which Meta currently trails competitors.

By 2026, MSL will gain access to Meta’s massive 1-gigawatt Prometheus cloud cluster in Ohio, designed to power large-scale AI training.

The investment and Meta’s parallel FAIR lab, led by Yann LeCun, signal the company’s multi-pronged strategy to catch up with OpenAI and Google in advanced AI research.

The collaboration dynamics between MSL, FAIR, and Meta’s generative AI unit remain unclear, but the company now boasts one of the strongest AI research teams in the industry.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!