The UAE’s energy giant ADNOC is pioneering the use of highly autonomous agentic AI in the energy sector through a partnership with G42, Microsoft, and AIQ, as announced by CEO Sultan Al Jaber at an industry event in Abu Dhabi. This move is part of a broader UAE strategy to reduce reliance on oil, with support from G42, which secured a $1.5 billion investment from Microsoft to fuel the nation’s tech industry diversification.
Agentic AI, viewed as the future of artificial intelligence, allows systems to operate independently and make proactive decisions. According to Al Jaber, this advanced AI will significantly enhance operations by analysing vast amounts of data, cutting seismic survey times from months to days, and improving production forecasts by up to 90%.
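In practice, an "agentic" system is usually built as a closed observe-decide-act loop that runs without a human prompting each step. The sketch below is a minimal illustration of that pattern under our own assumptions; every function here is a hypothetical placeholder, not an interface from ADNOC, AIQ, or Microsoft.

```python
import time

def observe() -> dict:
    """Pull the latest operational readings (hypothetical placeholder)."""
    return {"pressure_bar": 101.3, "throughput": 0.92}

def choose_action(state: dict) -> str:
    """Decide proactively from the current state. A real agent would
    consult a model here rather than a hand-written threshold."""
    return "reduce_flow" if state["pressure_bar"] > 110 else "hold"

def execute(action: str) -> None:
    """Carry out the chosen action (placeholder effector)."""
    print(f"executing: {action}")

# The defining trait of agentic AI: the loop runs unattended,
# acting on what it observes instead of waiting for instructions.
while True:
    execute(choose_action(observe()))
    time.sleep(60)  # re-assess once a minute
```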
The UAE’s government is investing billions in AI, including regional language-specific chatbots, positioning the Gulf state to remain economically influential as global demand for oil wanes.
Leading tech giants are racing to expand their AI infrastructure, with companies like Microsoft, Meta, and Amazon dedicating billions to meet rising demand. However, the heavy spending on data centres and computing power is sparking concern among investors eager for quicker returns, as mounting costs threaten profitability and raise questions about how soon these ventures will pay off.
Despite exceeding recent earnings forecasts, Big Tech stocks dropped on Thursday, underlining the pressure they face to balance AI expansion with shareholder expectations. Microsoft and Meta reported increased spending in their latest quarters, yet their shares fell, with Microsoft dropping 6% and Meta 4%. Amazon’s shares saw a brief dip before recovering on news of a strong third-quarter performance. Analysts point to a challenging road ahead as these firms juggle AI ambitions with market demands for near-term gains.
The challenges extend to capacity issues, with firms like Microsoft struggling to keep up with demand due to data centre constraints. Meanwhile, Meta forecasts that its AI-related expenses will increase significantly next year, and chip manufacturers like Nvidia and AMD are racing to fulfil orders. This supply bottleneck highlights the complex task of scaling up AI services, adding a layer of unpredictability to Big Tech’s efforts.
Despite short-term risks, companies remain committed to AI. Amazon CEO Andy Jassy described AI as a “once-in-a-lifetime” opportunity, while Meta’s Mark Zuckerberg likened today’s investment climate to the early days of cloud computing. As firms continue to ramp up infrastructure spending, they are counting on long-term returns, hoping to transform initial scepticism into eventual success.
Nvidia is seeking antitrust approval from the European Union for its planned acquisition of Israeli AI startup Run:ai, valued at approximately $700 million. The European Commission has raised concerns that the merger could harm competition in the markets where both companies operate, prompting increased scrutiny of tech giants acquiring startups. The review reflects a broader regulatory trend aimed at preventing potential monopolistic practices in the tech sector.
Although the acquisition does not meet the EU’s turnover threshold for automatic review, it was flagged by Italy’s competition agency, which asked the Commission to investigate further. The Commission accepted the referral, indicating that the transaction could significantly affect competition within the European Economic Area.
In response to the regulatory review, Nvidia said it is ready to cooperate and answer any questions about the acquisition. The company added that it is committed to keeping AI technologies accessible across platforms, emphasising its position as a leading supplier of chips for AI applications such as ChatGPT.
Praxis, a tech startup, has secured a $525 million investment to create a new city on the Mediterranean coast that merges cryptocurrency and AI. The ambitious project, aimed at crafting a tech-driven society, envisions a seamless blend of nature and advanced technology, where electric vehicles and AI-driven systems enhance urban life.
Founded in 2019 by Dryden Brown and Charlie Callinan, Praxis seeks to establish a utopian city that champions innovation, minimal governance, and a libertarian lifestyle. With plans to use cryptocurrency as the primary currency, the city promises to attract top tech talent and entrepreneurs looking for a fresh start free from traditional constraints. For all its idealism, the project has already drawn interest from over 2,000 prospective residents, with a waiting list of 50,000.
Collaborating with renowned firm Zaha Hadid Architects, Praxis aims to design a city that harmoniously fuses futuristic and traditional styles, ensuring adaptability for future growth. While some critics question the project’s feasibility, the support from prominent investors like Peter Thiel and Balaji Srinivasan underlines the potential for this vision to reshape urban living. With operations projected to begin around 2026, Praxis is set to host its first event in the Dominican Republic to gather leaders and innovators focused on the future of digital sovereignty.
Election officials across the US are intensifying efforts to counter deepfake robocalls as the 2024 election nears, worried about AI-driven disinformation campaigns. Unlike visible manipulated images or videos, fake audio calls targeting voters are harder to detect, leaving officials bracing for the impact on public trust. A recent incident in New Hampshire, where a robocall falsely claimed to be from President Biden urging people to skip voting, highlighted how disruptive these AI-generated calls can be.
Election leaders have developed low-tech methods to counter this high-tech threat, such as unique code words to verify identities in sensitive phone interactions. In states like Colorado, officials have been trained to respond quickly to suspicious calls, including hanging up and verifying information directly with their offices. Colorado’s Secretary of State Jena Griswold and other leaders are urging election directors to rely on trusted contacts to avoid being misled by convincing deepfake messages.
To counter misinformation, some states are also enlisting local leaders and community figures to help debunk false claims. Officials in states like Minnesota and Illinois have collaborated with media outlets and launched public awareness campaigns, warning voters about potential disinformation in the lead-up to the election. These campaigns, broadcast widely on television and radio, aim to preempt misinformation by providing accurate, timely information.
While no confirmed cases show that robocalls have swayed voters, election officials regard the potential impact as severe. Local efforts to counteract these messages, such as public statements and community outreach, serve as a reminder of the new and evolving risks AI technology brings to election security.
China’s People’s Liberation Army (PLA) has adapted Meta’s open-source AI model, Llama, to create a military-focused tool named ChatBIT. Developed by researchers from PLA-linked institutions, including the Academy of Military Science, ChatBIT is built on an earlier version of Llama, fine-tuned for military decision-making and intelligence-processing tasks. The tool reportedly outperforms some alternative AI models, though it falls short of OpenAI’s GPT-4.
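For context on the underlying technique: adapting an open-weight model such as Llama to a specialised domain typically means fine-tuning it on task-specific text. The sketch below shows one common, publicly documented recipe (LoRA adapters via the Hugging Face transformers, peft, and datasets libraries) purely as an illustration; the checkpoint name, corpus file, and hyperparameters are assumptions of ours, and nothing here reflects how ChatBIT itself was built.

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

base = "meta-llama/Llama-2-13b-hf"  # an earlier-generation open checkpoint
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token  # Llama ships without a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Attach small trainable LoRA matrices; the base weights stay frozen.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM"))

# A hypothetical JSONL corpus of domain text, one {"text": ...} per line.
data = load_dataset("json", data_files="domain_corpus.jsonl")["train"]
data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=512))

Trainer(
    model=model,
    args=TrainingArguments("out", per_device_train_batch_size=2,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
```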
Meta, which supports open innovation, prohibits military use of its models under its licence terms. However, the open-source nature of Llama limits Meta’s ability to prevent unauthorised adaptations such as ChatBIT. In response, Meta reaffirmed its commitment to ethical AI use and argued that US innovation must keep pace as China intensifies its investment in AI research.
China’s approach reflects a broader trend, as its institutions reportedly employ Western AI technologies for areas like airborne warfare and domestic security. With increasing US scrutiny over the national security implications of open-source AI, the Biden administration has moved to regulate AI’s development, balancing its potential benefits with growing risks of misuse.
Coframe, an AI startup focused on optimising websites and marketing, announced it has raised $9.3 million in seed funding. The funding round was co-led by Khosla Ventures and NFDG, the AI fund launched by former GitHub CEO Nat Friedman and ex-Apple executive Daniel Gross. Coframe’s platform uses generative AI to automatically test and refine website content, visuals, and code, enhancing personalisation and boosting user engagement for clients.
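Conceptually, "automatically test and refine" is a feedback loop: serve generated variants, measure engagement, and steer traffic toward whatever performs best. The epsilon-greedy sketch below illustrates that general idea under our own assumptions; the variant names are hypothetical, and this is not a description of Coframe’s actual algorithm.

```python
import random

# Click and view tallies per AI-generated variant (hypothetical names).
stats = {"headline_a": [0, 0], "headline_b": [0, 0]}  # [clicks, views]

def pick_variant(eps: float = 0.1) -> str:
    """Mostly serve the variant with the best observed click-through
    rate, but explore a random one a fraction of the time."""
    if random.random() < eps:
        return random.choice(list(stats))
    return max(stats, key=lambda v: stats[v][0] / max(stats[v][1], 1))

def record_impression(variant: str, clicked: bool) -> None:
    """Update the running tally after a variant is served."""
    stats[variant][1] += 1
    stats[variant][0] += int(clicked)
```

A production system would layer per-user personalisation and statistical significance testing on top, but the core test-and-refine loop looks much like this.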
CEO Josh Payne noted that Coframe’s recent trial with a major international firm showed impressive results, with campaigns increasing click-through rates by an average of 42%, while some segments saw a 352% improvement. Coframe has also collaborated with OpenAI to develop a specialised AI model that generates custom user interface code, ensuring on-brand and visually consistent website elements.
Currently in a limited testing phase, Coframe is working closely with growth and marketing teams to fine-tune its platform. The company aims to redefine how businesses design user experiences by tailoring website interfaces based on users’ profiles and intent.
Meta Platforms exceeded third-quarter profit and revenue estimates, reporting a profit of $6.03 per share against a projected $5.25. Revenue reached $40.59 billion, just ahead of analysts’ forecasts. However, the company warned of increased infrastructure expenses tied to its AI ambitions, sending its shares down 2.9% in after-hours trading.
The company is navigating heavy spending on AI infrastructure to support new technologies, setting it apart from cloud service providers, which typically profit more directly from similar investments. Meta’s expenses for the quarter totalled $23.2 billion, with capital expenditure at $9.2 billion. While it adjusted its annual expense forecast to $96–98 billion, it foresees a rise in depreciation and operating costs due to its expanding data centre fleet.
Meta’s core ad business remains essential to covering its AI investments, and analysts believe holiday ad spending could bolster the company’s earnings further. In the third quarter, Meta’s daily active users across its app family grew 5% to 3.29 billion, while its Reality Labs division saw losses of $4.4 billion, slightly better than expected.
Toyota and Nippon Telegraph and Telephone (NTT) plan to invest 500 billion yen ($3.27 billion) by 2030 to create an AI-driven platform to reduce traffic accidents. In a joint statement, the Japanese automaker and telecom giant said they aim to launch the platform by 2028, using extensive driving data to support driver-assistance technology. The project, initiated amid rising pressure on Japanese automakers to compete in autonomous driving, is expected to enhance safety features such as improved visibility in urban areas and smoother expressway merging.
The companies intend the platform to benefit not only their own operations but also government and industry partners, setting a long-term goal of minimising traffic accidents. Toyota and NTT, which first collaborated on 5G-connected car technology in 2017, see the project as part of a broader vision for zero-accident mobility, aiming for widespread adoption by 2030.
Toyota’s existing investments in autonomous technology include Woven by Toyota, a unit established in 2021 focused on AI mobility. Woven by Toyota is also developing the Arene automotive software platform and Woven City, a testing hub in Shizuoka. As part of these advancements, NTT and Toyota also plan to test self-driving technology as early as 2025.
The discovery of AI chatbots resembling deceased teenagers Molly Russell and Brianna Ghey on Character.ai has drawn intense backlash, with critics denouncing the platform’s moderation. Character.ai, which lets users create digital personas, faced criticism after ‘sickening’ replicas of Russell, who died by suicide at 14, and Ghey, who was murdered in 2023, appeared on the platform. The Molly Rose Foundation, a charity named in Russell’s memory, described these chatbots as a ‘reprehensible’ failure of moderation.
Concerns about the platform’s handling of sensitive content have already led to legal action in the US, where a mother is suing Character.ai after claiming her 14-year-old son took his own life following interactions with a chatbot. Character.ai insists it prioritises safety and actively moderates avatars in line with user reports and internal policies. After being alerted to the Russell and Ghey chatbots, it removed them from the platform, saying it strives to protect users while acknowledging the difficulty of policing AI-generated personas.
Amidst rapid advancements in AI, experts stress the need for regulatory oversight of platforms hosting user-generated content. Andy Burrows, head of the Molly Rose Foundation, argued stronger regulation is essential to prevent similar incidents, while Brianna Ghey’s mother, Esther Ghey, highlighted the manipulation risks in unregulated digital spaces. The incident underscores the emotional and societal harm that can arise from unsupervised AI-generated personas.
The case has sparked wider debates over the responsibilities of companies like Character.ai, which states it bans impersonation and dangerous content. Despite automated tools and a growing trust and safety team, the platform faces calls for more effective safeguards. AI moderation remains an evolving field, but recent cases have underscored the pressing need to address risks linked to online platforms and user-created chatbots.