KPMG has committed $100 million over the next four years to enhance its enterprise AI services through collaboration with Google Cloud. The investment will focus on developing AI tools, training employees, and leveraging Google’s technology to scale AI solutions for clients.
Steve Chase, KPMG’s vice chair for AI and innovation, highlighted that enterprise demand for AI has surged, with many businesses planning substantial investments in the technology. KPMG’s partnership with Google aligns with a broader strategy to expand AI services across multiple cloud platforms, including a prior $2 billion collaboration with Microsoft.
Google Cloud’s president of revenue, Matt Renner, noted the rapid growth in cloud services, emphasising the synergy between cloud providers and consulting firms as a key driver of future industry expansion.
A Massachusetts judge upheld disciplinary measures against a high school senior accused of cheating with an AI tool. The Hingham High School student’s parents sought to erase his record and raise his history grade, but the court sided with the school. Officials determined the student violated academic integrity by copying AI-generated text, including fabricated citations.
The student faced penalties including detention and temporary exclusion from the National Honor Society, to which he was later readmitted. His parents argued that unclear rules on AI use had caused confusion and claimed the school violated his constitutional rights, but the court found the existing plagiarism policy sufficient.
Judge Paul Levenson acknowledged AI’s challenges in education but said the evidence showed misuse. The student and his partner had copied AI-generated content indiscriminately, bypassing proper review. The judge declined to order immediate changes to the student’s record or grade.
The case remains unresolved as the parents plan to pursue further legal action. School representatives praised the decision, describing it as accurate and lawful. The ruling highlights the growing complexities of generative AI in academic settings.
Numenta, supported by the Gates Foundation, has introduced an open-source AI model designed to cut energy and data use compared with existing AI systems. The model reflects the company’s distinctive theory of how the brain works, shaped by co-founder Jeff Hawkins’ background in neuroscience. Hawkins, best known for creating the Palm Pilot, has channelled his study of human cognition into this new approach to AI.
Unlike conventional AI systems that require vast data and electricity for training, Numenta’s model mimics the brain’s ability to process information in real time. It can adapt dynamically, like a child learning through exploration. The technology is designed to improve robotics, writing tools, and more, emphasising flexibility and efficiency.
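To make that contrast concrete, the sketch below shows generic online (per-example) updating next to conventional batch training. It is an illustration of the general idea only, not Numenta’s actual algorithm: the running-mean “model” and all names in it are invented for this example.

```python
# Generic batch-vs-online contrast (NOT Numenta's algorithm):
# fitting a running mean to a stream of numbers stands in for a model
# that learns from each example as it arrives, rather than needing the
# whole dataset (and a costly training run) up front.

def batch_mean(data: list[float]) -> float:
    # Conventional style: every example must be collected before "training".
    return sum(data) / len(data)

class OnlineMean:
    # Online style: the estimate updates incrementally per example,
    # so it can adapt in real time as new data streams in.
    def __init__(self) -> None:
        self.n = 0
        self.value = 0.0

    def update(self, x: float) -> float:
        self.n += 1
        self.value += (x - self.value) / self.n  # incremental update rule
        return self.value

stream = [2.0, 4.0, 6.0, 8.0]
model = OnlineMean()
for x in stream:
    model.update(x)  # learns from each sample as it arrives

assert model.value == batch_mean(stream)  # same answer, no stored dataset
```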
To encourage broader adoption, Numenta has made its technology freely available, following a similar open-source trend seen with tech giants like Meta. However, CEO Subutai Ahmad emphasised the importance of closely monitoring its use, given concerns over potential misuse as the technology evolves.
The Irish Data Protection Commission (DPC) is awaiting guidance from the European Data Protection Board (EDPB) on handling AI-related privacy issues under the EU’s General Data Protection Regulation (GDPR). Data protection commissioners Des Hogan and Dale Sunderland emphasised the need for clarity, particularly on whether personal data continues to exist within AI training models. The EDPB is expected to provide its opinion before the end of the year, helping harmonise regulatory approaches across Europe.
The DPC has been at the forefront of addressing AI and privacy concerns, especially as companies like Meta, Google, and X (formerly Twitter) use EU users’ data to train large language models. As part of this growing responsibility, the Irish authority is also preparing for a potential role in overseeing national compliance with the EU’s upcoming AI Act, following the country’s November elections.
The regulatory landscape has faced pushback from Big Tech companies, with some arguing that stringent regulations could hinder innovation. Despite this, Hogan and Sunderland stressed the DPC’s commitment to enforcing GDPR compliance, citing recent legal actions, including a €310 million fine on LinkedIn for data misuse. With two more significant decisions expected by the end of the year, the DPC remains a key player in shaping data privacy in the age of AI.
Nvidia reported a staggering $19 billion in net income last quarter but faced questions about whether its rapid growth can be sustained amid shifts in how AI is developed. Analysts pressed CEO Jensen Huang on how Nvidia’s position might evolve with trends like ‘test-time scaling’, a method that improves AI responses by applying more computing power during inference, the phase in which a trained model generates answers.
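As an illustration only, the sketch below shows one common form of test-time scaling, best-of-N sampling, in which extra inference compute buys multiple candidate answers and a scoring step keeps the best one. The `sample_answer` stub and its random scores are hypothetical placeholders, not Nvidia’s or any vendor’s actual API.

```python
import random

# Toy stand-in for a language model: it returns a candidate answer
# together with a quality score. In a real system the score might come
# from the model's own log-probabilities or from a separate verifier
# model; here it is random, since the point is the control flow.
def sample_answer(prompt: str) -> tuple[str, float]:
    score = random.random()
    return f"candidate answer (quality={score:.2f})", score

def best_of_n(prompt: str, n: int) -> str:
    # Draw n independent candidates and keep the highest-scoring one.
    # Larger n means more inference-time compute per query, which is
    # the trade-off that "test-time scaling" refers to.
    candidates = [sample_answer(prompt) for _ in range(n)]
    best, _score = max(candidates, key=lambda c: c[1])
    return best

print(best_of_n("What is 17 * 24?", n=16))  # 16x the compute of a single sample
```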
Huang described test-time scaling as a groundbreaking development and emphasised Nvidia’s readiness to support it. He noted that while most of the company’s focus remains on pretraining AI models, the growing emphasis on inference could transform the AI landscape. Nvidia’s dominance in pretraining has propelled its stock up 180% this year, but competition in AI inference is heating up, with startups like Groq and Cerebras offering alternative chip solutions.
Despite concerns about diminishing returns from traditional AI scaling, Huang remains optimistic, asserting that foundational AI development continues to advance. He reiterated Nvidia’s advantage as the largest AI inference platform globally, citing the company’s scale and reliability as critical factors in maintaining its edge.
New Lantern, a startup founded by engineer Shiva Suri, has raised $19 million in Series A funding led by Benchmark. Inspired by watching his mother work as a radiologist, Suri created the platform to address inefficiencies in the field. New Lantern combines radiology’s two core tools, the PACS that stores medical images and the reporting software used to write up findings, into a single streamlined system powered by AI.
Unlike other AI solutions that focus on replacing radiologists, New Lantern enhances productivity by automating repetitive tasks like measurements and report generation. This approach allows radiologists to focus on analysing scans, which Benchmark’s Eric Vishria praised for doubling efficiency. The startup’s software is already being used by some radiology practices, although specifics remain undisclosed.
Suri envisions New Lantern as the next major evolution in radiology, akin to the industry’s shift from physical film to digital PACS. With plans to fully modernise the field, including cloud-based data storage, the company aims to disrupt entrenched players like GE Healthcare and Microsoft’s Nuance. For Suri, the stakes are also personal: his mother is an avid supporter of the platform she inspired.
Actor and filmmaker Ben Affleck has weighed in on the ongoing debate over AI in the entertainment industry, arguing that AI poses little immediate threat to actors and screenwriters. Speaking to CNBC, Affleck stated that while AI can replicate certain styles, it lacks the creative depth required to craft meaningful narratives or performances, likening it to a poor substitute for human ingenuity.
Affleck, co-founder of a film studio with fellow actor Matt Damon, expressed optimism about AI’s role in Hollywood, suggesting it might even generate new opportunities for creative professionals. However, he raised concerns about its potential impact on the visual effects industry, which could face significant disruptions as AI technologies advance.
Strikes by Hollywood unions last year highlighted fears that AI could replace creative talent. Affleck remains sceptical of such a scenario, maintaining that storytelling and human performance remain uniquely human domains that AI is unlikely to master soon.
Security experts are urging caution when using AI chatbots like ChatGPT and Grok for interpreting medical scans or sharing private health information. Recent trends show users uploading X-rays, MRIs, and other sensitive data to these platforms, but such actions can pose significant privacy risks. Uploaded medical images may become part of training datasets for AI models, leaving personal information exposed to misuse.
Unlike healthcare apps covered by laws like HIPAA, many AI chatbots lack strict data protection safeguards. Companies offering these services may use the data to improve their algorithms, but it’s often unclear who has access or how the data will be used. This lack of transparency has raised alarms among privacy advocates.
Elon Musk, owner of X, recently encouraged users to upload medical imagery to Grok, the platform’s AI chatbot, citing its potential to evolve into a reliable diagnostic tool. However, Musk acknowledged that Grok is still in its early stages, and critics warn that sharing such data online could have lasting consequences.
Google has announced a $20 million fund, with an additional $2 million in cloud credits, to support researchers using AI to tackle complex scientific challenges. The initiative, unveiled by Google DeepMind CEO Demis Hassabis at the AI for Science Forum in London, is part of Google’s broader strategy to foster innovation and collaboration with academic and non-profit organisations globally.
The funding will prioritise interdisciplinary projects addressing challenges in fields such as rare disease research, experimental biology, sustainability, and materials science. Google plans to distribute the funding to approximately 15 organisations by 2026, ensuring each grant is substantial enough to drive impactful breakthroughs. The programme reflects Google’s aim to position itself as a key partner in advancing science through AI, building on successes like AlphaFold, which recently earned DeepMind leaders a Nobel Prize in Chemistry.
The move aligns with a growing trend among Big Tech firms investing heavily in AI-driven research. Amazon’s AWS recently committed $110 million to similar grants, underscoring the race to attract leading scientists and researchers into their ecosystems. Hassabis expressed hope that the initiative would inspire greater collaboration between the private and public sectors and further demonstrate AI’s transformative potential in science.
California-based AI startup Enfabrica has raised $115 million in a funding round to tackle one of the field’s most pressing challenges: enabling vast networks of AI chips to work together seamlessly at scale. The company, founded by former engineers from Broadcom and Alphabet, plans to release its new networking chip early next year. The chip aims to improve efficiency by addressing bottlenecks in how AI computing chips interact with networks, a problem that slows data processing and wastes resources.
The startup claims its technology can scale AI networks to connect up to 500,000 chips, significantly surpassing the current limit of around 100,000. That could speed up the training of larger AI models and reduce the time and cost lost to unreliable or inaccurate runs. “The attributes of the network, like bandwidth and resiliency, are critical for scaling AI efficiently,” said Enfabrica CEO Rochan Sankar.
Investors in the funding round included Spark Capital, Maverick Silicon, and corporate backers such as Arm Holdings and Samsung Ventures. Nvidia, an industry leader in AI chips, also participated, signalling strong support for Enfabrica’s mission to optimise AI infrastructure.