Goodman Group has emerged as a standout performer in Australia’s real estate sector this year, with its stock soaring 45.8%, marking its strongest run since 2006. The surge is driven by a boom in AI, which has sparked frenzied demand for data centres. Global tech giants like Amazon, Microsoft, and Meta have poured billions into expanding their data centre capacity, fueling growth for developers like Goodman.
At the end of September, 42% of Goodman’s A$12.8 billion ($7.96 billion) development portfolio was dedicated to data centres, a jump from 37% last year. Analysts like John Lockton of Sandstone Insights see this focus as a key strength, noting the company’s access to land with power supply, a critical factor for future data-centre projects.
Despite the optimism, some caution remains. Analysts warn that soaring valuations in the data-centre sector could cool investor enthusiasm. Goodman’s high stock prices and concerns over risks like obsolescence and increased competition raise questions about long-term returns. Nonetheless, with robust demand for AI infrastructure, Goodman’s pipeline and strategic positioning keep it well-poised for continued growth.
AI is transforming education for students with disabilities, offering tools that level the playing field. From reading assistance to speech and language tools, AI is enabling students to overcome learning barriers. For 14-year-old Makenzie Gilkison, who has dyslexia, AI-powered assistive technology has been life-changing, allowing her to excel academically and keep pace with her peers.
Schools are increasingly adopting AI for personalised learning, balancing its benefits with ethical considerations. Tools like chatbots and text-to-speech programs enhance accessibility while raising concerns about over-reliance and the potential for misuse. Experts emphasise that AI should support, not replace, learning.
Research and development are advancing rapidly, addressing challenges like children’s handwriting and speech impediments. Initiatives such as the National AI Institute for Exceptional Education aim to refine these tools, while educators work to ensure students and teachers are equipped to harness their potential effectively.
Chinese AI firm DeepSeek has unveiled DeepSeek V3, a groundbreaking open-source model designed for a range of text-based tasks. Released under a permissive licence, the model supports coding, translations, essay writing, and email drafting, offering developers the freedom to modify and deploy it commercially.
In internal benchmarks, DeepSeek V3 outperformed major competitors, including Meta’s Llama 3.1 and OpenAI’s GPT-4o, especially in coding contests and integration tests. The model boasts an impressive 671 billion parameters, significantly exceeding the size of many rivals, which often correlates with higher performance.
DeepSeek-V3!
60 tokens/second (3x faster than V2!) API compatibility intact Fully open-source models & papers 671B MoE parameters 37B activated parameters Trained on 14.8T high-quality tokens
DeepSeek V3 was trained on a dataset of 14.8 trillion tokens and built using a data centre powered by Nvidia H800 GPUs. Remarkably, the model was developed in just two months for a reported $5.5 million—far less than comparable systems. However, its size and resource demands make it less practical without high-end hardware.
Regulatory limitations influence the model’s responses, particularly on politically sensitive topics. DeepSeek, backed by High-Flyer Capital Management, continues to push for advancements in AI, striving to compete with leading global firms despite restrictions on access to cutting-edge GPUs.
New research by The Guardian reveals that ChatGPT Search, OpenAI’s recently launched AI-powered search tool, can be misled into generating false or overly positive summaries. By embedding hidden text in web pages, researchers demonstrated that the AI could ignore negative reviews or even produce malicious code.
The feature, designed to streamline browsing by summarising content such as product reviews, is susceptible to hidden text attacks—a well-known vulnerability in large language models. While this issue has been studied before, this marks the first time such manipulation has been proven on a live AI search tool.
OpenAI did not comment on this specific case but stated it employs measures to block malicious websites and is working to improve its defences. Experts note that competitors like Google, with more experience in search technology, have developed stronger safeguards against similar threats.
AI startups specialising in sales development representatives (SDRs) are experiencing rapid growth as businesses embrace new technologies to streamline outreach. These startups, leveraging large language models (LLMs) and voice technology, automate tasks like crafting personalised emails and placing calls to potential customers. This sector has seen an unprecedented surge, with multiple companies achieving notable success in a short span, according to Shardul Shah of Index Ventures. However, investors remain cautious about whether this trend will yield lasting results or fade once the novelty wears off.
The appeal of AI SDRs is particularly strong among small and medium-sized businesses, which find it easier to experiment with these tools. Arjun Pillai, founder of Docket, attributes the popularity to declining reply rates for traditional cold emails, prompting businesses to explore AI-driven solutions. Startups like Regie.ai, AiSDR, and 11x.ai, as well as incumbents like ZoomInfo, are vying for market share, boasting impressive revenue growth. Yet, as Tomasz Tunguz of Theory Ventures noted, some businesses report that while AI SDRs generate substantial leads, they don’t necessarily translate into higher sales, highlighting a gap in effectively integrating AI into sales strategies.
Despite the enthusiasm, the rise of AI SDRs faces significant challenges. Industry leaders such as Salesforce and HubSpot, which control vast customer data, could introduce similar AI features, potentially outpacing smaller startups. Investors also point to cautionary tales like Jasper, a copywriting AI startup that stumbled after the launch of ChatGPT, emphasising the uncertainty surrounding the longevity of AI adoption in sales. For now, the potential of AI SDRs to revolutionise sales processes is undeniable, but their ability to sustain growth and deliver tangible results remains to be seen.
AI is set to redefine retail in 2025, offering highly personalised shopping experiences. AI assistants are expected to manage up to 20% of eCommerce tasks, including product recommendations and customer service. Industry leaders like Citi and Google Cloud predict more intuitive and efficient retail processes but warn about data privacy concerns. Enhanced demand forecasting could also reduce inventory costs by 10%.
Experts highlight potential challenges, such as algorithmic biases and AI-driven fraud. Regulators worldwide are preparing new policies to ensure secure and fair AI implementation as businesses invest heavily in AI capabilities.
AI will not only handle routine tasks but also revolutionise customer interactions. With advanced behavioural insights and multimodal capabilities, businesses are poised to gain deeper understanding and engagement with their customers. However, widespread industry transformation is expected to take several years as companies address scalability and trust in AI decision-making.
The Indian government has launched several initiatives to strengthen consumer protection, focusing on leveraging technology and enhancing online safety. Key developments include the introduction of the AI-enabled National Consumer Helpline, the e-Maap Portal, and the Jago Grahak Jago mobile application, all designed to expedite the resolution of consumer complaints and empower citizens to make informed choices.
The government of India also highlighted the significant progress made through the three-tier consumer court system, resolving thousands of disputes this year. In the realm of e-commerce, major platforms like Reliance Retail, Tata Sons, and Zomato pledged to enhance online shopping security, reflecting the government’s commitment to ensuring consumer confidence in the digital marketplace.
The e-Daakhil Portal has been expanded nationwide, achieving 100% adoption in states like Karnataka, Punjab, and Rajasthan, making it easier for consumers to file complaints online. The Consumer Protection Authority (CCPA) is also drafting new guidelines to regulate surrogate advertising and has already taken action against 13 companies for non-compliance with existing rules.
The importance of these initiatives was underscored at the National Consumer Day event, where key officials, including Minister of State for Consumer Affairs B L Verma and TRAI Chairman Anil Kumar Lahoti, were present. The event highlighted the government’s ongoing efforts to foster a safer and more transparent consumer environment, especially in the rapidly evolving digital landscape.
AI became a defining feature of the 2024 Paris Olympics. Athletes benefited from AI-driven tools like chatbots for cybersecurity and systems offering 360-degree performance replays. AI also enhanced event safety with software monitoring crowd dynamics and abandoned objects, paving the way for future global events.
Outside the Olympics, AI was integrated into consumer technology. Car manufacturers such as Volkswagen and XPeng introduced AI-assisted features, transforming vehicles into adaptive companions. Volkswagen’s ChatGPT integration enhanced in-car assistance, while XPeng’s AI-defined car promised autonomous decision-making.
Flying taxis generated excitement but failed to soar as anticipated. Despite setbacks, companies like Volocopter and Hyundai showcased designs, while Joby secured a UK license, with commercial flights expected by 2025. Tesla unveiled the Cybercab, relying solely on AI for navigation, with a 2026 market launch planned.
Smartphones and smart glasses underwent significant innovation. Bendable phones by Motorola and Lenovo offered new flexibility, and Samsung’s toughened foldable displays impressed. Meta’s neural-interface glasses broke ground but raised privacy concerns, demonstrating the balance between progress and ethics in technology.
A satirical video imagining Spain’s political rivals embracing the festive spirit has captured attention nationwide. The AI-generated clip, created by the collective United Unknown, portrays unlikely moments of reconciliation, such as Prime Minister Pedro Sánchez and conservative leader Alberto Núñez Feijóo sharing a warm hug. Former King Juan Carlos and Queen Sofía are also shown exchanging a kiss, despite their well-documented estrangement.
The video, titled The Magic of Christmas and set to the song Rockin’ Around the Christmas Tree, uses deepfake technology to depict other striking scenes. Far-right Vox leader Santiago Abascal and Catalan separatist Gabriel Rufián are seen laughing together, while Podemos founders Íñigo Errejón and Pablo Iglesias appear to have resolved their differences, chuckling and embracing. Madrid’s conservative leader Isabel Díaz Ayuso and Labour Minister Yolanda Díaz also feature, exchanging smiles and gestures of goodwill.
Since its release on X on 20 December, the video has been viewed over 3.4 million times and received widespread acclaim for its creative ingenuity. Gabriel Rufián, one of the depicted politicians, even retweeted the post. However, not all responses have been positive, with some raising concerns about the growing realism of AI-generated content and its potential to blur the line between reality and fiction.
United Unknown describes itself as a ‘visual guerrilla’ collective, known for satirical deepfakes often targeting Spain’s political scene. While the video has been celebrated as a humorous take on political differences, it also sparks a broader conversation about the implications of AI technology in modern media.
OpenAI’s ChatGPT search tool is under scrutiny after a Guardian investigation revealed vulnerabilities to manipulation and malicious content. Hidden text on websites can alter AI responses, raising concerns over the tool’s reliability. The search feature, currently available to premium users, could misrepresent products or services by summarising planted positive content, even when negative reviews exist.
Cybersecurity researcher Jacob Larsen warned that the AI system in its current form might enable deceptive practices. Tests revealed how hidden prompts on webpages influence ChatGPT to deliver biased reviews. The same mechanism could be exploited to distribute malicious code, as highlighted in a recent cryptocurrency scam where the tool inadvertently shared credential-stealing instructions.
Experts emphasised that while combining search with AI models like ChatGPT offers potential, it also increases risks. Karsten Nohl, a scientist at SR Labs, likened such AI tools to a ‘co-pilot’ requiring oversight. Misjudgments by the technology could amplify risks, particularly as it lacks the ability to critically evaluate sources.
OpenAI acknowledges the possibility of errors, cautioning users to verify information. However, broader implications, such as how these vulnerabilities could impact website practices, remain unclear. Hidden text, while traditionally penalised by search engines like Google, may find new life in manipulating AI-based tools, posing challenges for OpenAI in securing the system.