The risky rise of all-in-one AI companions

A concerning new trend is emerging: AI companions are merging with mental health tools, blurring ethical lines. Human therapists are required to maintain a professional distance. Yet AI doesn’t follow such rules; it can be both confidant and counsellor.

AI chatbots are increasingly marketed as friendly companions. At the same time, they can offer mental health advice. Combined, you get an AI friend who also becomes your emotional guide. The mix might feel comforting, but it’s not without risks.

Unlike a human therapist, AI has no ethical compass. It mimics caring responses based on patterns, not understanding. One prompt might elicit empathetic advice delivered with best-friend energy, a murky interaction without safeguards.

The deeper issue? There’s little incentive for AI makers to stop this. Blending companionship and therapy boosts user engagement and profits. Unless laws intervene, these all-in-one bots will keep evolving.

There’s also a massive privacy cost. People confide personal feelings to these bots, often daily, for months. The data may be reviewed, stored, and reused to train future models. Your digital friend and therapist might also be your data collector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google signs groundbreaking deal to cut data centre energy use

Google has become the first major tech firm to sign formal agreements with US electric utilities to ease grid pressure. The deals come as data centres drive unprecedented energy demand, straining power infrastructure in several regions.

The company will work with Indiana Michigan Power and the Tennessee Valley Authority to reduce electricity usage during peak demand, freeing capacity for the wider grid when needed.

Under the agreements, Google will temporarily scale down its data centre operations, particularly those linked to energy-intensive AI and machine learning workloads.

Google described the initiative as a way to speed up data centre integration with local grids while avoiding costly infrastructure expansion. The move reflects growing concern over AI’s rising energy footprint.

Demand-response programmes, once used mainly in heavy manufacturing and crypto mining, are now being adopted by tech firms to stabilise grids in return for lower energy costs.
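At its core, a demand-response arrangement is a simple control decision: shed deferrable load when the grid nears its peak. A minimal sketch of one such policy, assuming a hypothetical linear curtailment rule (the function name, thresholds, and cap are illustrative assumptions, not Google's actual mechanism):

```python
def curtailment_fraction(grid_load_mw: float, peak_threshold_mw: float,
                         max_curtailment: float = 0.5) -> float:
    """Return the fraction of deferrable load (e.g. AI training jobs) to shed.

    Hypothetical policy: below the utility's peak threshold nothing is
    curtailed; above it, curtailment grows linearly with the overload,
    capped so essential operations keep running.
    """
    if grid_load_mw <= peak_threshold_mw:
        return 0.0
    overload = (grid_load_mw - peak_threshold_mw) / peak_threshold_mw
    return min(max_curtailment, overload)
```

Under this sketch, a grid running 10% over threshold would trigger a 10% cut to deferrable workloads, while a severe event would be capped at the 50% ceiling; the utility compensates the operator for that flexibility with lower energy costs.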

OpenAI launches ‘study mode’ to curb AI-fuelled cheating

OpenAI has introduced a new ‘study mode’ to help students use AI for learning rather than cheating. The update arrives amid a spike in academic dishonesty linked to generative AI tools.

According to The Guardian, a UK survey found nearly 7,000 confirmed cases of AI misuse during the 2023–24 academic year. Universities are under pressure to adapt assessments in response.

Available under the chatbot’s Tools menu, the new mode walks users through questions with step-by-step guidance, acting more like a tutor than a solution engine.

Jayna Devani, OpenAI’s international education lead, said the aim is to foster productive use of AI. ‘It’s guiding me towards an answer, rather than just giving it to me first-hand,’ she explained.

The tool can assist with homework and exam prep and even interpret uploaded images of past papers. OpenAI cautions it may still produce errors, underscoring the need for broader conversations around AI in education.

Musk’s robotaxi ambitions threatened as Tesla faces a $243 million Autopilot verdict

A recent court verdict has ordered Tesla to pay approximately $243 million in damages over a 2019 fatal crash involving an Autopilot-equipped Model S.

The Florida jury found Tesla’s driver-assistance software defective, a claim the company intends to appeal, asserting that the driver was solely responsible for the incident.

The ruling may significantly impact Tesla’s ambitions to expand its emerging robotaxi network in the US, fuelling heightened scrutiny over the safety of the company’s autonomous technology from both regulators and the public.

The timing of this legal setback is critical as Tesla is seeking regulatory approval for its robotaxi services, crucial to its market valuation and efforts to manage global competition while facing backlash against CEO Elon Musk’s political views.

Additionally, the company has recently awarded Musk a substantial new compensation package worth approximately $29 billion in stock options, signalling its continued reliance on his leadership at a critical juncture as it plans to transition from a struggling auto business toward futuristic ventures like robotaxis and humanoid robots.

Tesla’s approach to autonomous driving relies on cameras and AI instead of the more expensive lidar and radar used by competitors, and underpins the limited robotaxi trial it has begun in Texas. However, its aggressive expansion plans for the service contrast starkly with the cautious rollouts of companies such as Waymo, which runs the US’s only commercial driverless robotaxi system.

The jury’s decision also complicates Tesla’s interactions with state regulators, as the company awaits approvals in multiple states, including California and Florida. While Nevada has engaged with Tesla regarding its robotaxi programme, Arizona remains indecisive.

This ruling challenges Tesla’s safety narrative, especially since the case involved a distracted driver whose vehicle ran a stop sign and collided with a parked car, yet the Autopilot system was still found partially responsible.

Source: Reuters

AI adoption soothes stress even as job fears rise among employees

A recent Fortune survey indicates that 61 percent of white-collar professionals expect AI to make their roles, or even their entire teams, obsolete within 3–5 years, yet most continue to rely on AI tools daily without visible concern.

Seventy percent of respondents credit AI with boosting their creativity and productivity, and 40 percent say it has eased stress and improved work-life balance. Despite these benefits, many admit to ‘feigning’ AI use in workplace settings, often driven by peer pressure or a lack of formal training.

Executive commentary underscores the tension: senior business leaders, including Ford’s Jim Farley and Anthropic’s Dario Amodei, predict rapid AI-driven disruption of white-collar roles. Some executives forecast that up to 50 percent of certain job categories could be eliminated, though others argue AI may open new opportunities.

Academic studies suggest a more nuanced impact: AI is reshaping role definitions by automating routine tasks while increasing demand for complementary skills, such as ethics, teamwork, and digital fluency. Wage benefits are growing in jobs that effectively blend AI with human oversight.

Altman shares first glimpse of GPT-5 via Pantheon screenshot

OpenAI CEO Sam Altman shared a screenshot on X showing GPT-5 in action. The post casually endorsed the animated sci-fi series Pantheon, a cult tech favourite exploring general AI.

When asked if GPT-5 also recommends the show, Altman replied with a screenshot: ‘turns out yes’. It marked one of the earliest public glimpses of the new model, hinting at expanded capabilities.

GPT-5 is expected to outperform its predecessors, with a larger context window, multimodal abilities, and more agentic task handling. The screenshot also shows that some quirks remain, such as its fondness for the em dash.

The model identified Pantheon as having a 100% critic rating on Rotten Tomatoes and described it as ‘cerebral, emotional, and philosophically intense’. Business Insider verified the score and tone of the reviews.

OpenAI faces mounting pressure to keep pace with rivals like Google DeepMind, Meta, xAI, and Anthropic. Public teasers such as this one suggest GPT-5 will soon make a broader debut.

AI’s transformation of work habits, mindset and lifestyle

At Mindvalley’s AI Summit, former Google Chief Decision Scientist Cassie Kozyrkov described AI as not a substitute for human thought but a magnifier of what the human mind can produce. Rather than replacing us, AI lets us offload mundane tasks and focus on deeper cognitive and creative work.

Work structures are being transformed, not just in factories but behind computer screens. AI now handles the administrative ‘work about work’: multitasking, scheduling, and research summarisation, lowering friction in knowledge work and enabling people to supervise agents rather than execute tasks manually.

Personal life is being reshaped, too. AI tools for finance or health, such as budgeting apps or personalised diagnostics, move decisions into data-augmented systems with faster insight and fewer human biases.

Meanwhile, creativity is co-authored via AI-generated design, music or writing, requiring humans to filter, refine and ideate beyond the algorithm.

Recognising cognitive change, AI thought leaders envision a new era where ‘blended work’ prevails: humans manage AI agents, call the shots, and wield ethical oversight, while the AI executes pipelines of repetitive or semi-intelligent tasks.

Scholars warn that this model demands new fairness, transparency, and collaboration skills.

The US considers chip tracking to prevent smuggling to China

The US is exploring how to build better location-tracking into advanced chips, as part of an effort to prevent American semiconductors from ending up in China.

Michael Kratsios, a senior official behind Donald Trump’s AI strategy, confirmed that software or physical updates to chips are being considered to support traceability.

Instead of relying on external enforcement, Washington aims to work directly with the tech industry to improve monitoring of chip movements. The strategy forms part of a broader national plan to counter smuggling and maintain US dominance in cutting-edge technologies.

Beijing recently summoned Nvidia representatives to address concerns over American proposals linked to tracking features and perceived security risks in the company’s H20 chips.

Although US officials have not held direct talks with Nvidia or AMD on the matter, Kratsios clarified that chip tracking is now a formal objective.

The move comes even as Trump’s team signals readiness to lift certain export restrictions to China in return for trade benefits, such as rare-earth magnet sales to the US.

Kratsios criticised China’s push to lead global AI regulation, saying countries should define their paths instead of following a centralised model. He argued that the US innovation-first approach offers a more attractive alternative.

Google AI Mode raises fears over control of news

Google’s AI Mode has quietly launched in the UK, reshaping how users access news by summarising information directly in search results.

By paraphrasing content gathered across the internet, the tool offers instant answers while reducing the need to visit original news sites.

Critics argue that the technology monopolises UK information by filtering what users see, based on algorithms rather than editorial judgement. Concerns have grown over transparency, fairness and the future of independent journalism.

Publishers are not compensated for content used by AI Mode, and most users rarely click through to the sources. Newsrooms fear pressure to adapt their output to align with Google’s preferences or risk being buried online.

While AI may offer convenience, it lacks accountability. Regulated journalism must operate under legal frameworks, whereas AI faces no such scrutiny even when its errors have real consequences.

AI tools like Grok 4 may make developers obsolete, Musk suggests

Elon Musk has predicted a major shift in software development, claiming that AI is turning coding from a job into a recreational activity. The xAI CEO believes AI has removed much of the ‘drudgery’ from writing software.

Replying to OpenAI President Greg Brockman, Musk compared the future of coding to painting. He suggested that software creation will be more creative and expressive, no longer requiring professional expertise for functional outcomes.

Musk, a co-founder of OpenAI, left the organisation after a public dispute with the current CEO, Sam Altman. He later launched xAI, which now operates the Grok chatbot as a rival to ChatGPT, Gemini and Claude.

Generative AI firms are accelerating efforts in automated coding. OpenAI recently launched Codex, a cloud-based software engineering agent, while Microsoft released GitHub Spark to generate apps from natural language.

xAI’s latest offering, Grok 4, supports over 20 programming languages and integrates with code editors. It enables developers to write, debug, and understand code using commands.
