Teachers fight back against AI misuse

Educators are turning to AI to tackle academic dishonesty, which is increasingly prevalent in digital learning environments. Tools like ChatGPT have made it easier than ever for students to generate entire assignments. To counter this, teachers are employing AI detection tools and innovative strategies to maintain academic integrity.

Understanding AI’s capabilities is crucial in detecting misuse. Educators are advised to familiarise themselves with tools like ChatGPT by testing them with sample assignments. Collecting genuine writing samples from students early in the semester provides a baseline for comparison, helping identify potential AI-generated work. Tools designed specifically to detect AI writing further assist in verifying authenticity.
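The baseline-comparison idea can be illustrated with a toy word-frequency similarity check. This is a deliberately simplified stand-in for the stylometric analysis that purpose-built detection tools perform; the sample texts and the function below are hypothetical, not part of any named tool.

```python
from collections import Counter
import math

def cosine_similarity(text_a, text_b):
    """Compare the word-frequency profiles of two writing samples.

    Returns a score between 0.0 (no shared vocabulary) and 1.0 (identical
    frequency profiles). A submission scoring far below a student's own
    baseline might warrant a closer look, not an accusation.
    """
    freq_a = Counter(text_a.lower().split())
    freq_b = Counter(text_b.lower().split())
    words = set(freq_a) | set(freq_b)
    dot = sum(freq_a[w] * freq_b[w] for w in words)
    norm_a = math.sqrt(sum(v * v for v in freq_a.values()))
    norm_b = math.sqrt(sum(v * v for v in freq_b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Hypothetical samples: an early-semester baseline vs. a later submission.
baseline = "i think the book was really good because the main character felt real"
submission = "the novel masterfully interrogates themes of alienation and identity"
print(cosine_similarity(baseline, submission))
print(round(cosine_similarity(baseline, baseline), 2))  # identical samples score 1.0
```

Real detectors weigh far richer signals (sentence length, punctuation habits, rare-word usage), but the principle is the same: compare against a known-genuine baseline rather than judging a single text in isolation.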

Requesting rewrites is another effective approach when AI usage is suspected. By asking an AI tool to rewrite a suspected piece, teachers can highlight the telltale signs of machine-generated text, such as a lack of personal style and overuse of synonyms. Strong evidence of AI misuse strengthens cases when addressing cheating with students and school administrators.

The rise of AI in education underscores the need for vigilance. Teachers must balance scepticism with evidence-based methods to ensure fairness. Maintaining a collaborative and transparent approach can help foster a culture of learning over shortcuts.

AI-powered OMEN Max 16 from HP redefines gaming

HP Inc has launched the OMEN Max 16, the world’s first AI-driven gaming laptop, promising unparalleled performance and immersive experiences. Unveiled at CES 2025 on January 6, the device features cutting-edge OMEN AI technology that optimises performance and thermals automatically, ensuring uninterrupted gameplay for even the most demanding titles.

The OMEN AI Beta software is a standout innovation, offering gamers a personalised solution for maximising frames per second (FPS). Designed to eliminate trial-and-error troubleshooting, the software recommends optimised operating system, hardware, and game settings tailored to each unique setup. Starting with support for Counter-Strike, the application is set to expand to more popular games.

In addition to its advanced software, the OMEN Max 16 is equipped with top-tier hardware, including an Intel Core Ultra 9 or AMD Ryzen AI 9 processor and up to 64 GB of DDR5 RAM. These features make it capable of handling even the most resource-intensive games with ease.

HP also introduced the OMEN 32x Smart Gaming Monitor, its first gaming display with built-in Google TV, offering gamers an all-in-one entertainment and gaming solution. With these innovations, HP continues to redefine gaming technology, prioritising performance, personalisation, and ease of use.

Spikerz raises $7 million to fight social media threats

Social media security firm Spikerz has raised $7 million in a seed funding round led by Disruptive AI, with contributions from Horizon Capital, Wix Ventures, Storytime Capital, and BDMI. The funding highlights the growing demand for innovative solutions to combat cyber threats on social platforms.

The startup specialises in protecting social media accounts from phishing attacks, scams, and other risks posed by increasingly sophisticated cybercriminals. Its platform also helps users detect and remove fake accounts, malicious bots, and visibility restrictions like shadowbans. These features are particularly valuable for businesses, influencers, and brands relying on social platforms for growth.

Spikerz plans to use the investment to enhance its AI-driven platform, expand its global reach, and bolster its team. CEO Naveh Ben Dror emphasised the importance of staying ahead of malicious actors who are now leveraging advanced technologies like generative AI. He described the funding as a strong vote of confidence in the company’s mission to secure social media accounts worldwide.

The firm’s efforts come at a critical time when social media platforms play a central role in the success of businesses and creators. With the latest backing, Spikerz aims to provide cutting-edge tools to safeguard these digital livelihoods.

Apple halts AI news summaries after NUJ criticism

Apple has suspended its AI-generated news summary feature after criticism from the National Union of Journalists (NUJ). Concerns were raised over the tool’s inaccurate reporting and its potential role in spreading misinformation.

The NUJ welcomed the decision, emphasising the risks posed by automated reporting. Recent errors in AI-generated summaries highlighted how such tools can undermine public trust in journalism. NUJ assistant general secretary Séamus Dooley called for a more human-centred approach to reporting.

Apple’s decision follows growing scrutiny of AI’s role in journalism. Critics argue that while automation can streamline news delivery, it must not compromise accuracy or credibility.

The NUJ has urged Apple to prioritise transparency and accountability as it further develops its AI capabilities. Safeguarding trust in journalism remains a key concern in the evolving media landscape.

AI and LLMs struggle with historical accuracy in advanced tests

According to a recent study, AI models have shown limitations in tackling high-level historical inquiries. Researchers tested three leading large language models (LLMs) — GPT-4, Llama, and Gemini — using a newly developed benchmark, Hist-LLM. The test, based on the Seshat Global History Databank, revealed disappointing results, with GPT-4 Turbo achieving only 46% accuracy, barely surpassing random guessing.

Researchers from Austria’s Complexity Science Hub presented the findings at the NeurIPS conference last month. Co-author Maria del Rio-Chanona highlighted that while LLMs excel at basic facts, they struggle with nuanced, PhD-level historical questions. Errors included incorrect claims about ancient Egypt’s military and armour development, often due to the models extrapolating from prominent but irrelevant data.

Biases in training data also emerged, with models underperforming on questions related to underrepresented regions like sub-Saharan Africa. Lead researcher Peter Turchin acknowledged these shortcomings but emphasised the potential of LLMs to support historians with future improvements.

Efforts are underway to refine the benchmark by incorporating more diverse data and crafting complex questions. Researchers remain optimistic about AI’s capacity to assist in historical research despite its current gaps.

Spain to allocate 150 million euros for AI integration in companies

Spain’s government has announced a new initiative to promote the adoption of AI technologies across the country’s businesses. Prime Minister Pedro Sánchez revealed on Monday that the government will provide an additional 150 million euros ($155 million) in subsidies aimed at supporting companies in their efforts to integrate AI into their operations.

The funding is designed to help businesses harness the potential of AI, which has become a critical driver of innovation and efficiency in various sectors, from manufacturing to healthcare and finance. The subsidies will be available to companies looking to develop or adopt AI-based solutions, helping to foster digital transformation and maintain Spain’s competitive edge in the global economy.

Sánchez emphasised that the funding will play a vital role in ensuring Spain remains at the forefront of the digital revolution, helping to build a robust, AI-powered economy. The move comes as part of Spain’s broader strategy to invest in technology and innovation, aiming to enhance productivity and create new opportunities for growth in both the public and private sectors.

Chinese firm MiniMax unveils advanced AI models amid rising tensions

Chinese AI company MiniMax has introduced three new models—MiniMax-Text-01, MiniMax-VL-01, and T2A-01-HD—designed to compete with leading systems developed by firms such as OpenAI and Google. Backed by Alibaba and Tencent, MiniMax has raised $850 million in funding and is valued at over $2.5 billion. The models include a text-only model, a multimodal model capable of processing text and images, and an audio generator capable of creating synthetic speech in multiple languages.

MiniMax-Text-01 boasts a 4-million-token context window, significantly larger than those of competing systems, allowing it to process extensive text inputs. Its performance rivals industry leaders like Google’s Gemini 2.0 Flash in benchmarks measuring problem-solving and comprehension skills. The multimodal MiniMax-VL-01 excels at image-text tasks but trails some competitors on specific evaluations. T2A-01-HD, the audio generator, delivers high-quality synthetic speech and can clone voices using just 10 seconds of recorded audio.

The models, mostly accessible via platforms like GitHub and Hugging Face, come with licensing restrictions that prevent their use in developing competing AI systems. MiniMax has faced controversies, including allegations of unauthorised use of copyrighted data for training and concerns about AI-generated content replicating logos and public figures. The releases coincide with new US restrictions on AI technology exports to China, potentially heightening challenges for Chinese AI firms aiming to compete globally.

Generative AI accelerates US defence strategies

The Pentagon is leveraging generative AI to accelerate critical defence operations, particularly the ‘kill chain’, a process of identifying, tracking, and neutralising threats. According to Dr Radha Plumb, the Pentagon’s Chief Digital and AI Officer, AI’s current role is limited to aiding planning and strategising phases, ensuring commanders can respond swiftly while maintaining human oversight over life-and-death decisions.

Major AI firms like OpenAI and Anthropic have softened their policies to collaborate with defence agencies, but only under strict ethical boundaries. These partnerships aim to balance innovation with responsibility, ensuring AI systems are not used to cause harm directly. Meta, Anthropic, and Cohere are among the tech giants working with defence contractors, providing tools that optimise operational planning without breaching ethical standards.

Dr Plumb emphasised that the Pentagon’s AI systems operate as part of human-machine collaboration, countering fears of fully autonomous weapons. Despite debates over AI’s role in defence, officials argue that working with the technology is vital to ensure its ethical application. Critics, however, continue to question the transparency and long-term implications of such alliances.

As AI becomes central to defence strategies, the Pentagon’s commitment to integrating ethical safeguards highlights the delicate balance between technological advancement and human control.

FTC warns of risks in big tech AI partnerships

The Federal Trade Commission (FTC) has raised concerns about the competitive risks posed by collaborations between major technology companies and developers of generative AI tools. In a staff report issued Friday, the agency pointed to partnerships such as Microsoft’s investment in OpenAI and similar alliances involving Amazon, Google, and Anthropic as potentially harmful to market competition, according to TechCrunch.

FTC Chair Lina Khan warned that these collaborations could create barriers for smaller startups, limit access to crucial AI tools, and expose sensitive information. ‘These partnerships by big tech firms can create lock-in, deprive start-ups of key AI inputs, and reveal sensitive information that undermines fair competition,’ Khan stated.

The report specifically highlights the role of cloud service providers like Microsoft, Amazon, and Google, which supply essential resources such as computing power and technical expertise to AI developers. These arrangements could restrict smaller firms’ access to those critical resources, raise switching costs for businesses, and give cloud providers unique insights into sensitive data, potentially stifling competition.

Microsoft defended its partnership with OpenAI, emphasising its benefits to the industry. ‘This collaboration has enabled one of the most successful AI startups in the world and spurred unprecedented technology investment and innovation,’ said Rima Alaily, Microsoft’s deputy general counsel. The FTC report underscores the need to address the broader implications of big tech’s growing dominance in generative AI.

AFP partnership strengthens Mistral’s global reach

Mistral, a Paris-based AI company, has entered a groundbreaking partnership with Agence France-Presse (AFP) to enhance the accuracy of its chatbot, Le Chat. The deal signals Mistral’s determination to broaden its scope beyond foundational model development.

Through the agreement, Le Chat will gain access to AFP’s extensive archive, which includes over 2,300 daily stories in six languages and records dating back to 1983. The multi-year arrangement covers text only; photos and videos are not included. By incorporating AFP’s multilingual and multicultural resources, Mistral aims to deliver more accurate and reliable responses tailored to business needs.
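A common pattern for grounding a chatbot in a news archive is retrieval-augmented generation: fetch the passages most relevant to a query, then hand them to the model as context. The sketch below illustrates that idea with naive keyword overlap; it is purely illustrative, as the article does not describe Mistral’s actual retrieval architecture, and the archive snippets and function names here are invented.

```python
def retrieve(query, archive, top_k=1):
    """Rank archive passages by simple keyword overlap with the query.

    Production systems would use embeddings and semantic search instead
    of raw word overlap; this keeps the retrieval step visible.
    """
    q_words = set(query.lower().split())
    scored = sorted(
        archive,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, archive):
    """Prepend the most relevant passage so the model answers from sourced text."""
    context = "\n".join(retrieve(query, archive))
    return f"Using only this AFP context:\n{context}\n\nQuestion: {query}"

# Invented stand-ins for archive entries.
archive = [
    "AFP: Paris hosted the summit on Tuesday, officials said.",
    "AFP: The central bank held interest rates steady in June.",
]
print(build_prompt("What did the central bank do to interest rates?", archive))
```

The payoff of this pattern is that answers can be traced back to specific wire stories, which is the kind of accuracy and reliability gain the partnership is aiming for.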

The partnership bolsters Mistral’s standing against AI leaders like OpenAI and Anthropic, which have secured similar content agreements. Le Chat’s enhanced features align with Mistral’s broader strategy to develop user-friendly applications that rival popular tools such as ChatGPT and Claude.

Mistral’s co-founder and CEO, Arthur Mensch, emphasised the importance of the partnership, describing it as a step toward offering clients a unique and culturally diverse AI solution. The agreement reinforces Mistral’s commitment to innovation and its global relevance in the rapidly evolving AI landscape.