Educators are embracing AI to tackle academic dishonesty, which is increasingly prevalent in digital learning environments. Tools like ChatGPT have made it easier than ever for students to generate entire assignments. To counter this, teachers are employing AI detection tools alongside classroom strategies to maintain academic integrity.
Understanding AI’s capabilities is crucial in detecting misuse. Educators are advised to familiarise themselves with tools like ChatGPT by testing them on sample assignments. Collecting genuine writing samples from students early in the semester provides a baseline for comparison, helping to identify potentially AI-generated work. Tools designed specifically to detect AI writing further assist in verifying authenticity.
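As a rough illustration of the baseline idea, the sketch below compares a suspect essay against a student's earlier writing using character n-gram frequencies and cosine similarity. The file names and the 0.5 threshold are illustrative assumptions; commercial AI detectors rely on far richer signals, and no single score should be treated as proof of misconduct.

```python
# Illustrative stylometric baseline check (not how commercial AI
# detectors work): compare a suspect essay against a student's own
# earlier writing via character 3-gram frequencies.
from collections import Counter
from math import sqrt

def ngram_profile(text: str, n: int = 3) -> Counter:
    """Count overlapping character n-grams in whitespace-normalised text."""
    text = " ".join(text.lower().split())
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(a[g] * b[g] for g in a.keys() & b.keys())
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical file names for the semester-start baseline and the essay in question.
baseline = ngram_profile(open("student_baseline.txt").read())
suspect = ngram_profile(open("suspect_essay.txt").read())

score = cosine_similarity(baseline, suspect)
print(f"Stylistic similarity to baseline: {score:.2f}")
if score < 0.5:  # illustrative threshold, not a validated cut-off
    print("Style deviates notably from the baseline; worth a closer look.")
```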
Requesting rewrites is another effective approach when AI usage is suspected. By asking an AI tool to rewrite a suspect passage, teachers can surface the telltale signs of machine-generated text, such as a lack of personal style and an overreliance on synonyms. Documenting such evidence strengthens a teacher's case when raising suspected cheating with students and school administrators.
The rise of AI in education underscores the need for vigilance. Teachers must balance scepticism with evidence-based methods to ensure fairness. Maintaining a collaborative and transparent approach can help foster a culture of learning over shortcuts.
HP Inc has launched the OMEN Max 16, which it bills as the world's first AI-driven gaming laptop, promising unparalleled performance and immersive experiences. Unveiled at CES 2025 on January 6, the device features OMEN AI technology that automatically optimises performance and thermals, ensuring uninterrupted gameplay for even the most demanding titles.
The OMEN AI Beta software is a standout innovation, offering gamers a personalised solution for maximising frames per second (FPS). Designed to eliminate trial-and-error troubleshooting, the software recommends optimised operating system, hardware, and game settings tailored to each unique setup. Starting with support for Counter-Strike, the application is set to expand to more popular games.
In addition to its advanced software, the OMEN Max 16 is equipped with top-tier hardware, including an Intel Core Ultra 9 or AMD Ryzen AI 9 processor and up to 64 GB of DDR5 RAM. These features make it capable of handling even the most resource-intensive games with ease.
HP also introduced the OMEN 32x Smart Gaming Monitor, its first gaming display with built-in Google TV, offering gamers an all-in-one entertainment and gaming solution. With these innovations, HP continues to redefine gaming technology, prioritising performance, personalisation, and ease of use.
Social media security firm Spikerz has raised $7 million in a seed funding round led by Disruptive AI, with contributions from Horizon Capital, Wix Ventures, Storytime Capital, and BDMI. The funding highlights the growing demand for innovative solutions to combat cyber threats on social platforms.
The startup specialises in protecting social media accounts from phishing attacks, scams, and other risks posed by increasingly sophisticated cybercriminals. Its platform also helps users detect and remove fake accounts, malicious bots, and visibility restrictions like shadowbans. These features are particularly valuable for businesses, influencers, and brands relying on social platforms for growth.
Spikerz plans to use the investment to enhance its AI-driven platform, expand its global reach, and bolster its team. CEO Naveh Ben Dror emphasised the importance of staying ahead of malicious actors who are now leveraging advanced technologies like generative AI. He described the funding as a strong vote of confidence in the company’s mission to secure social media accounts worldwide.
The firm’s efforts come at a critical time when social media platforms play a central role in the success of businesses and creators. With the latest backing, Spikerz aims to provide cutting-edge tools to safeguard these digital livelihoods.
Apple has suspended its AI-generated news summary feature after criticism from the National Union of Journalists (NUJ). Concerns were raised over the tool’s inaccurate reporting and its potential role in spreading misinformation.
The NUJ welcomed the decision, emphasising the risks posed by automated reporting. Recent errors in AI-generated summaries highlighted how such tools can undermine public trust in journalism. NUJ assistant general secretary Séamus Dooley called for a more human-centred approach to reporting.
Apple’s decision follows growing scrutiny of AI’s role in journalism. Critics argue that while automation can streamline news delivery, it must not compromise accuracy or credibility.
The NUJ has urged Apple to prioritise transparency and accountability as it further develops its AI capabilities. Safeguarding trust in journalism remains a key concern in the evolving media landscape.
OpenAI plans to introduce AI ‘super-agents’ designed to handle complex tasks at an expert level, according to a report by Axios. These advanced systems aim to perform intricate, goal-oriented tasks, far surpassing current AI chatbot capabilities. The announcement is expected within weeks, sparking widespread interest and scepticism alike.
OpenAI CEO Sam Altman moved to temper the speculation, posting on X: "twitter hype is out of control again. we are not gonna deploy AGI next month, nor have we built it. we have some very cool stuff for you but pls chill and cut your expectations 100x!"
Altman's recent engagements in Washington DC, including a scheduled closed-door meeting with US officials, had intensified that speculation, with social media rumours suggesting a breakthrough in artificial general intelligence (AGI); his post made clear that OpenAI has neither built AGI nor plans to deploy it imminently. Despite this, the proposed super-agents are projected to be transformative, with potential applications ranging from software creation to business operations.
Critics argue the claims may be overhyped. Notable figures like computer scientist Gary Marcus dismissed the feasibility of achieving such advancements in the near term. Concerns about reliability and persistent issues like information hallucination remain significant barriers to broader adoption.
Controversy also surrounds OpenAI’s flagship AI model, o3, and its reliance on a benchmark test developed by Epoch AI, a group funded by OpenAI. The FrontierMath test, intended to measure mathematical prowess, has faced scrutiny over its role in showcasing the model’s capabilities.
According to a recent study, AI models have shown limitations in tackling high-level historical inquiries. Researchers tested three leading large language models (LLMs) — GPT-4, Llama, and Gemini — using a newly developed benchmark, Hist-LLM. The test, based on the Seshat Global History Databank, revealed disappointing results, with GPT-4 Turbo achieving only 46% accuracy, barely surpassing random guessing.
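The headline figure is plain accuracy over graded answers. A minimal scoring sketch appears below; the JSON layout and file names are assumptions for illustration, since Hist-LLM's actual schema is not described here.

```python
# Minimal accuracy scoring for a multiple-choice benchmark run.
# The JSON structure ({question_id: answer_letter}) is a hypothetical
# stand-in for Hist-LLM's real format.
import json

def score(predictions_path: str, answer_key_path: str) -> float:
    """Fraction of questions where the model's choice matches the key."""
    with open(predictions_path) as f:
        preds = json.load(f)   # e.g. {"q001": "B", ...}
    with open(answer_key_path) as f:
        key = json.load(f)     # e.g. {"q001": "C", ...}
    correct = sum(1 for qid, answer in key.items() if preds.get(qid) == answer)
    return correct / len(key)

accuracy = score("model_predictions.json", "benchmark_answer_key.json")
print(f"Accuracy: {accuracy:.1%}")  # the study reports 46% for GPT-4 Turbo
```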
Researchers from Austria’s Complexity Science Hub presented the findings at the NeurIPS conference last month. Co-author Maria del Rio-Chanona highlighted that while LLMs excel at basic facts, they struggle with nuanced, PhD-level historical questions. Errors included incorrect claims about ancient Egypt’s military and armour development, often due to the models extrapolating from prominent but irrelevant data.
Biases in training data also emerged, with models underperforming on questions related to underrepresented regions like sub-Saharan Africa. Lead researcher Peter Turchin acknowledged these shortcomings but emphasised the potential of LLMs to support historians with future improvements.
Efforts are underway to refine the benchmark by incorporating more diverse data and crafting complex questions. Researchers remain optimistic about AI’s capacity to assist in historical research despite its current gaps.
Spain’s government has announced a new initiative to promote the adoption of AI technologies across the country’s businesses. Prime Minister Pedro Sanchez revealed on Monday that the government will provide an additional 150 million euros ($155 million) in subsidies aimed at supporting companies in their efforts to integrate AI into their operations.
The funding is designed to help businesses harness the potential of AI, which has become a critical driver of innovation and efficiency across sectors from manufacturing to healthcare and finance. The subsidies will be available to companies looking to develop or adopt AI-based solutions, with the aim of fostering digital transformation and maintaining Spain's competitive edge in the global economy.
Sanchez emphasised that the funding will play a vital role in ensuring Spain remains at the forefront of the digital revolution, helping to build a robust, AI-powered economy. The move comes as part of Spain’s broader strategy to invest in technology and innovation, aiming to enhance productivity and create new opportunities for growth in both the public and private sectors.
Meta has announced a deal to purchase 200 megawatts of solar power from multinational utility Engie. The move bolsters the tech giant's renewable energy portfolio, which now exceeds 12 gigawatts. The new solar farm, located in Texas near one of Meta's existing data centres, is expected to become operational in 2025.
The push for renewable energy comes as tech companies face rising power demands driven by AI development and the rapid construction of data centres. Meta recently revealed plans for a 2-gigawatt data centre in Louisiana, relying on natural gas. The firm has also expressed interest in nuclear power, seeking proposals for up to 4 gigawatts of nuclear energy by the early 2030s.
While nuclear energy garners significant attention, renewable sources are crucial in powering today’s tech infrastructure. Meta’s solar energy deal mirrors efforts by other tech giants like Google and Microsoft, which have secured multi-billion-dollar renewable energy agreements. As companies race to meet energy needs, the speed of renewable energy deployment continues to offer a competitive edge over emerging nuclear options.
Chinese AI company MiniMax has introduced three new models (MiniMax-Text-01, MiniMax-VL-01, and T2A-01-HD) designed to compete with leading systems from firms such as OpenAI and Google. Backed by Alibaba and Tencent, MiniMax has raised $850 million in funding and is valued at over $2.5 billion. The lineup comprises a text-only model, a multimodal model that processes text and images, and an audio generator that produces synthetic speech in multiple languages.
MiniMax-Text-01 boasts a 4-million-token context window, significantly larger than those of competing systems, allowing it to process extensive text inputs. Its performance rivals industry leaders like Google’s Gemini 2.0 Flash in benchmarks measuring problem-solving and comprehension skills. The multimodal MiniMax-VL-01 excels at image-text tasks but trails some competitors on specific evaluations. T2A-01-HD, the audio generator, delivers high-quality synthetic speech and can clone voices using just 10 seconds of recorded audio.
The models, mostly accessible via platforms like GitHub and Hugging Face, come with licensing restrictions that prevent their use in developing competing AI systems. MiniMax has faced controversies, including allegations of unauthorised use of copyrighted data for training and concerns about AI-generated content replicating logos and public figures. The releases coincide with new US restrictions on AI technology exports to China, potentially heightening challenges for Chinese AI firms aiming to compete globally.
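For readers who want to experiment, a hedged sketch of loading the text model through the standard Hugging Face transformers API follows. The repository id is an assumption based on the vendor's naming, the custom architecture requires trust_remote_code, and a model of this size needs substantial multi-GPU hardware, so check the model card and licence terms before running anything.

```python
# Hedged sketch: loading MiniMax's text model via Hugging Face transformers.
# The repo id below is an assumption; verify it (and the licence) on the
# actual model card. Requires the accelerate package for device_map="auto".
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "MiniMaxAI/MiniMax-Text-01"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    trust_remote_code=True,  # the repo ships custom architecture code
    device_map="auto",       # shard the weights across available GPUs
)

prompt = "Summarise the key events of the Industrial Revolution."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```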
The Pentagon is leveraging generative AI to accelerate critical defence operations, particularly the 'kill chain', the process of identifying, tracking, and neutralising threats. According to Dr Radha Plumb, the Pentagon's Chief Digital and AI Officer, AI's current role is limited to the planning and strategising phases, ensuring commanders can respond swiftly while retaining human oversight of life-and-death decisions.
Major AI firms like OpenAI and Anthropic have softened their policies to collaborate with defence agencies, but only within strict ethical boundaries. These partnerships aim to balance innovation with responsibility, ensuring AI systems are not used to cause harm directly. Meta, Anthropic, and Cohere are among the companies working with defence contractors, providing tools that optimise operational planning without breaching ethical standards.
Dr Plumb emphasised that the Pentagon's AI systems operate as part of a human-machine collaboration, countering fears of fully autonomous weapons. Despite debates over AI's role in defence, officials argue that engaging with the technology is vital to ensuring its ethical application. Critics, however, continue to question the transparency and long-term implications of such alliances.
As AI becomes central to defence strategies, the Pentagon’s commitment to integrating ethical safeguards highlights the delicate balance between technological advancement and human control.