The parents of a Massachusetts high school senior are suing Hingham High School and its district after their son received a “D” grade and detention for using AI in a social studies project. Jennifer and Dale Harris, the plaintiffs, argue that their son was unfairly punished, as there was no rule in the school’s handbook prohibiting AI use at the time. They claim the grade has impacted his eligibility for the National Honor Society and his applications to top-tier universities like Stanford and MIT.
The lawsuit, filed in Plymouth County District Court, alleges the school’s actions could cause “irreparable harm” to the student’s academic future. Jennifer Harris stated that their son’s use of AI should not be considered cheating, arguing that AI-generated content belongs to the creator. The school, however, classified it as plagiarism. The family’s lawyer, Peter Farrell, contends that there’s a wealth of information supporting their view that using AI isn’t plagiarism.
The Harrises are seeking to have their son’s grade changed and his academic record cleared. They emphasised that while they can’t reverse past punishments like detention, the school can still adjust his grade and confirm that he did not cheat. Hingham Public Schools has not commented on the ongoing litigation.
A London-based company, Synthesia, known for its lifelike AI video technology, is under scrutiny after its avatars were used in deepfake videos promoting authoritarian regimes. These AI-generated videos, featuring people like Mark Torres and Connor Yeates, falsely showed their likenesses endorsing the military leader of Burkina Faso, causing distress to the models involved. Despite the company’s claims of strengthened content moderation, many affected models were unaware of their image’s misuse until journalists informed them.
In 2022, actors like Torres and Yeates were hired to participate in Synthesia’s AI model shoots for corporate projects. They later discovered their avatars had been used in political propaganda, which they had not consented to. This caused emotional distress, as they feared personal and professional damage from the fake videos. Despite Synthesia’s efforts to ban accounts using its technology for such purposes, the harmful content spread online, including on platforms like Facebook.
UK-based Synthesia has expressed regret, stating it will continue to improve its processes. However, the long-term impact on the actors remains, with some questioning the lack of safeguards in the AI industry and warning of the dangers involved when likenesses are handed over to companies without adequate protections.
IBM unveiled its latest family of AI models, known as ‘Granite 3.0,’ on Monday, targeting businesses eager to adopt generative AI technology. The company aims to stand out from its competitors by offering these models as open source, a different approach from firms like Microsoft, which charge clients for access to their AI models. IBM’s open-source strategy promotes accessibility and flexibility, allowing businesses to customise and integrate these models as needed.
Alongside the Granite 3.0 models, IBM provides a paid service called Watsonx, which assists companies in running these models within their data centres once they are customised. This service gives enterprises more control over their AI solutions, enabling them to tailor and optimise the models for their specific needs while maintaining privacy and data security within their infrastructure.
The Granite models are already available for commercial use through the Watsonx platform. In addition, select models from the Granite family will be accessible on Nvidia’s AI software stack, allowing businesses to incorporate these models using Nvidia’s advanced tools and resources. IBM collaborated closely with Nvidia, utilising its H100 GPUs, a leading technology in the AI chip market, to train these models. Dario Gil, IBM’s research director, highlighted that the partnership with Nvidia is central to delivering powerful and efficient AI solutions for enterprises looking to stay ahead in a rapidly evolving technological landscape.
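Because the Granite 3.0 models are open source, they can in principle also be run outside Watsonx. Below is a minimal sketch of loading one for local inference with the Hugging Face transformers library; the model ID is an assumption for illustration, not a name confirmed by the announcement.

```python
# A minimal sketch of running a Granite 3.0 model locally with the
# Hugging Face transformers library. The model ID below is an assumed
# example; check IBM's official listings for exact names.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-3.0-8b-instruct"  # assumed model ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "List three risks to monitor in a retail supply chain."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```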
A new report from Aspen Digital reveals that 76% of Asia’s private wealth sector has already ventured into digital assets, with an additional 18% planning future investments. Interest in digital assets has surged since 2022, when just 58% of respondents had explored the space. The survey covered 80 family offices and high-net-worth individuals and found that most manage assets ranging from $10 million to $500 million.
Among those invested, 70% have allocated less than 5% of their portfolios to digital assets, although some increased their holdings to over 10% in 2024. Interest in decentralised finance (DeFi) and blockchain applications continues to grow, with two-thirds expressing a desire to explore DeFi, while 61% are keen on AI and decentralised physical infrastructure.
The approval of spot Bitcoin ETFs, particularly in the US and Hong Kong, has driven increased demand for digital assets. The report highlighted that 53% of investors have gained exposure through funds or ETFs, with optimism remaining high as 31% predict Bitcoin could reach $100,000 by the end of 2024.
A1 Austria, Eurofiber, and Quantcom have joined forces to develop a high-speed dark-fibre network connecting Frankfurt and Vienna, marking a significant advancement in European telecommunications. Scheduled for completion in December 2025, this ambitious project aims to deliver the ultra-low-latency infrastructure essential for meeting the growing demands of modern telecommunications.
By collaborating, these three providers are not only bolstering their technical capabilities but are also ensuring that the network will support a wide array of critical applications, including cloud services, media broadcasting, AI, and machine learning (ML). Furthermore, the network’s low latency will significantly enhance connectivity for key industries across Europe, making it a vital asset for telecommunications companies, fixed network operators, and global enterprises.
Ultimately, this new fibre network is poised to serve as a critical backbone for the region’s digital ecosystem, facilitating seamless communication and data exchange. As a result, it is expected to have a substantial economic impact by connecting various industries and enabling high-performance connectivity, thereby acting as a catalyst for growth across multiple sectors.
Moreover, this initiative addresses the current demand for faster and more reliable data transfer and lays the groundwork for a more robust digital infrastructure in Europe, thereby fostering innovation and economic development in the years to come.
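To put “ultra-low latency” in perspective, a rough back-of-envelope calculation is sketched below. The route length and propagation speed are illustrative assumptions, not figures from the announcement; they simply show why a direct dark-fibre path between the two cities keeps delays in the single-digit-millisecond range.

```python
# Back-of-envelope one-way latency for a Frankfurt-Vienna fibre path.
# Assumed values (not from the announcement): a ~730 km fibre route and
# light travelling at roughly 200,000 km/s in glass (about 2/3 of c).
route_km = 730
speed_km_per_s = 200_000

one_way_ms = route_km / speed_km_per_s * 1000
print(f"One-way propagation delay: {one_way_ms:.1f} ms")      # ~3.7 ms
print(f"Round-trip propagation:    {2 * one_way_ms:.1f} ms")  # ~7.3 ms
```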
Bain & Company announced it is expanding its partnership with OpenAI to offer AI tools like ChatGPT to its clients. The firms previously formed a global alliance to introduce OpenAI technology to Bain’s clients, and the consultancy has now made OpenAI platforms, including ChatGPT Enterprise, available to its employees worldwide.
Bain is also setting up an OpenAI Centre of Excellence, managed by its own team, to further integrate AI solutions. The partnership will initially focus on developing custom solutions for the retail and healthcare/life sciences industries, with plans for expansion into other sectors.
While Bain did not disclose financial details, around 50 employees will be dedicated to this collaboration, as reported by the Wall Street Journal.
Two independent candidates participated in an online debate on Thursday, engaging with an AI-generated version of incumbent congressman Don Beyer. The digital avatar, dubbed ‘DonBot’, was created using Beyer’s website and public materials to simulate his responses in the event, streamed on YouTube and Rumble.
Beyer, a Democrat seeking re-election, opted not to join the debate in person. His AI representation featured a robotic voice reading answers without imitating his tone. Independent challengers Bentley Hensel and David Kennedy appeared on camera, while the Republican candidate Jerry Torres did not participate. Viewership remained low, peaking at fewer than 20 viewers, and parts of DonBot’s responses were inaudible.
Hensel explained that the AI was programmed to provide unbiased answers using available public information. The debate tackled policy areas such as healthcare, gun control, and aid to Israel. When asked why voters should re-elect Beyer, the AI stated, ‘I believe that I can make a real difference in the lives of the people of Virginia’s 8th district.’
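The organisers have not published how DonBot was built, but grounding a chatbot in a candidate’s public materials typically follows a retrieval-then-prompt pattern. The sketch below illustrates that general pattern using the OpenAI Python SDK; the function, model choice, and prompt are hypothetical stand-ins, not DonBot’s actual implementation.

```python
# An illustrative retrieval-then-prompt pattern for grounding a chatbot
# in a candidate's public materials. The function, model name, and
# prompt are hypothetical; the debate organisers have not published
# DonBot's actual implementation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def answer_as_candidate(question: str, passages: list[str]) -> str:
    """Answer a debate question using only supplied public statements."""
    context = "\n\n".join(passages)  # e.g. text scraped from the campaign site
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "system",
             "content": "You speak as the candidate. Base every answer "
                        "only on these public statements:\n\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```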
Although the event saw minimal impact, observers suggest the use of AI in politics could become more prevalent. The reliance on such technology raises concerns about transparency, especially if no regulations are introduced to guide its use in future elections.
Microsoft has announced that starting in November, customers can build autonomous AI agents through its Copilot Studio, marking a significant step in the company’s AI-driven strategy. Unlike traditional chatbots, these autonomous agents require minimal human intervention and can perform tasks like handling customer inquiries, identifying sales leads, and managing inventory. Microsoft sees these agents as crucial tools for an AI-driven world where businesses can automate routine processes more efficiently.
The move follows a growing trend among tech giants, including Salesforce, to monetise AI investments by offering companies practical, user-friendly tools. Copilot Studio will allow Microsoft customers to create these autonomous agents without needing advanced coding skills. Using a combination of Microsoft’s in-house AI models and technology from OpenAI, the software giant is poised to expand its AI offerings to a broader audience.
In addition to enabling custom-built agents, Microsoft will provide ten pre-made agents designed to handle everyday business tasks, such as managing supply chains, tracking expenses, and communicating with clients. McKinsey and Co., which had early access to these tools, successfully created an agent to streamline client inquiries, showing the potential for real-world application.
Charles Lamanna, corporate vice president at Microsoft, emphasised that Copilot would be the user interface for interacting with these AI agents, envisioning a future where every employee has a personalised AI assistant. These agents could become essential in how businesses interact with AI technology daily.
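Copilot Studio itself is configured through a low-code interface, but the underlying idea of an autonomous agent can be sketched generically: a model repeatedly chooses a tool, observes the result, and continues until the task is done. The skeleton below is purely illustrative and makes no claims about Microsoft’s internals.

```python
# A generic skeleton of an autonomous-agent loop: the model plans the
# next step, a tool executes it, and the loop repeats until the task is
# done. Purely illustrative; not a description of Copilot Studio's
# internals.
from typing import Callable

def run_agent(task: str,
              plan_next_step: Callable[[str, list[str]], str],
              tools: dict[str, Callable[[str], str]],
              max_steps: int = 10) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        step = plan_next_step(task, history)  # e.g. an LLM decides "tool:arg"
        if step == "DONE":                    # the model signals completion
            break
        tool_name, _, arg = step.partition(":")
        result = tools[tool_name](arg)        # e.g. query an inventory system
        history.append(f"{step} -> {result}")
    return history
```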
Why does it matter?
Despite Microsoft’s ambitious AI plans, there have been concerns about the pace of Copilot’s adoption. Recent surveys from Gartner indicated that many organisations have not moved beyond the pilot phase with these AI tools. While Microsoft’s stock has seen ups and downs, investor pressure continues to mount for the company to demonstrate concrete returns on its AI investments. Nonetheless, with the release of Copilot Studio, Microsoft aims to accelerate AI adoption and solidify its role in the evolving AI landscape.
US federal prosecutors are ramping up efforts to tackle the use of AI tools in creating child sexual abuse images, as they fear the technology could lead to a rise in illegal content. The Justice Department has already pursued two cases this year against individuals accused of using generative AI to produce explicit images of minors. James Silver, chief of the Department’s Computer Crime and Intellectual Property Section, anticipates more cases, cautioning against the normalisation of AI-generated abuse material.
Child safety advocates and prosecutors worry that AI systems can alter ordinary photos of children to produce abusive content, making it more challenging to identify and protect actual victims. The National Center for Missing and Exploited Children reports approximately 450 cases each month involving AI-generated abuse. While this number is small compared to the millions of online child exploitation reports received, it represents a concerning trend in the misuse of technology.
The legal framework is still evolving regarding cases involving AI-generated abuse, particularly when identifiable children are not depicted. Prosecutors are resorting to obscenity charges when traditional child pornography laws do not apply. This is evident in the case of Steven Anderegg, accused of using Stable Diffusion to create explicit images. Similarly, US Army soldier Seth Herrera faces child pornography charges for allegedly using AI chatbots to alter innocent photos into abusive content. Both defendants have pleaded not guilty.
Nonprofit groups like Thorn and All Tech Is Human are working with major tech companies, including Google, Amazon, Meta, OpenAI, and Stability AI, to prevent AI models from generating abusive content and to monitor their platforms. Thorn’s vice president, Rebecca Portnoff, emphasised that the issue is not just a future risk but a current problem, urging action during this critical period to prevent its escalation.
The authors make several points relevant to global AI discussions. First, as AI becomes integral to the global economy, the authors warn of the looming threat of concentrated corporate control, which risks stifling innovation, compromising consumer privacy, and undermining democratic values. To combat this, they advocate for a diverse AI market that includes public, private, and non-profit stakeholders to ensure the technology’s benefits are widely distributed.
In the report, titled ‘Stopping Big Tech from Becoming Big AI’, the authors lay out a series of detailed, practical measures to check rising market concentration and keep AI open for all.
Second, the report highlights monopolistic risks: tactics such as exclusive partnerships and control over computing power allow dominant firms to consolidate power, restricting competition and innovation. Although often invisible to consumers, these practices could centralise AI development and inhibit market diversity. As an action point, the authors call on governments to act swiftly using existing regulatory tools, such as blocking mergers and enforcing ex-ante competition policies, to dismantle these barriers and impose fair access rules on essential AI resources.
Finally, international cooperation is another key point, particularly the importance of recognising the global nature of AI development. The authors warn against repeating past mistakes of digital market dominance, emphasising the need for a unified approach to AI regulation. The report asserts that, by fostering competition, AI can deliver broader societal benefits, prioritising innovation and privacy over profit maximisation and surveillance.
Why does it matter?
The global community sees the current moment as a pivotal chance to shape AI’s future for the collective good, urging immediate regulatory intervention. Echoing this approach, the report aims to ensure that AI remains a competitive field characterised by transparency and fairness, safeguarding a digital economy that benefits all stakeholders equally.