Google is planning to roll out new features that will identify AI-generated or AI-edited images in search results. This update will flag such images in the ‘About this image’ section across Google Search, Google Lens, and the Circle to Search feature on Android. In the future, this disclosure feature may also be extended to other Google platforms, such as YouTube.
To achieve this, Google will use metadata from the Coalition for Content Provenance and Authenticity (C2PA), which records an image’s history, including how it was created and edited. However, adoption of the C2PA standard remains limited, and metadata can be altered or stripped, which may undermine the reliability of this identification method.
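To give a sense of what this provenance data looks like, the sketch below shows how a consumer of C2PA-style metadata might check whether an image’s manifest records an AI-generation step. The manifest is hand-written for illustration rather than produced by the real C2PA tooling; the assertion label and IPTC digitalSourceType values follow the published conventions but should be checked against the specification before use.

```python
# Illustrative only: a hand-written structure mimicking a C2PA "actions"
# assertion, not output from the actual C2PA SDK.
example_manifest = {
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        "digitalSourceType": "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
                    }
                ]
            },
        }
    ]
}

# IPTC digital source type values that indicate AI involvement (assumed subset).
AI_SOURCE_TYPES = {
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
    "http://cv.iptc.org/newscodes/digitalsourcetype/compositeWithTrainedAlgorithmicMedia",
}


def records_ai_generation(manifest: dict) -> bool:
    """Return True if any recorded action declares an AI digital source type."""
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion.get("data", {}).get("actions", []):
            if action.get("digitalSourceType") in AI_SOURCE_TYPES:
                return True
    return False


print(records_ai_generation(example_manifest))  # True for this example
```

Features such as ‘About this image’ would surface this kind of provenance information to users directly, rather than expecting them to inspect manifests themselves.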
Despite the challenges, Google’s action addresses the increasing concerns about deepfakes and AI-generated content. There have been reports of a significant rise in scams involving such content, and losses related to deepfakes are expected to increase dramatically in the coming years. As public concern about deepfakes and AI-driven misinformation grows, Google’s initiative aims to provide more transparency in digital media.
Japanese technology corporation NEC (Nippon Electric Company) is developing an AI technology designed to analyse and verify the trustworthiness of online information. The project, launched under Japan’s Ministry of Internal Affairs and Communications, aims to help combat false and misleading content on the internet. The system will be tested by fact-checking organisations, including the Japan Fact-check Center and major media outlets, with the goal of making it widely available by 2025.
The AI uses Large Language Models (LLMs) to assess different types of content, such as text, images, video, and audio, detecting whether they have been manipulated or are misleading. The system then evaluates the information’s reliability, checking for inconsistencies and verifying sources, and produces reports that allow for user-driven adjustments, such as removing unreliable information or adding new details, helping fact-checking organisations streamline their verification processes.
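NEC has not published the internals of its system, so the following is only a minimal sketch, assuming a generic LLM backend, of how a reliability-assessment step like the one described above might fit into a fact-checking pipeline. The call_llm stub and the JSON report format are placeholders, not NEC’s actual design.

```python
import json


def call_llm(prompt: str) -> str:
    """Placeholder for whichever LLM backend a fact-checking team uses."""
    raise NotImplementedError("wire this up to an LLM provider")


def assess_claim(claim: str, source_excerpts: list[str]) -> dict:
    """Ask an LLM to rate a claim's reliability against the supplied sources."""
    prompt = (
        "You are assisting a fact-checking organisation.\n"
        f"Claim: {claim}\n"
        "Sources:\n" + "\n".join(f"- {s}" for s in source_excerpts) + "\n"
        "Return JSON with keys: reliability (0-1), inconsistencies (list), "
        "missing_context (list)."
    )
    report = json.loads(call_llm(prompt))
    # Reports like this can then be adjusted by human fact-checkers, e.g. by
    # removing unreliable sources or adding new details before publication.
    return report
```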
As the project progresses, NEC hopes to refine its AI system to assist fact-checkers more effectively, ensuring that false information can be identified and addressed in real time. The technology could become a vital tool for media and fact-checking organizations, addressing the growing problem of misinformation online.
The US Federal Communications Commission (FCC) has introduced new proposals to regulate AI-generated communications in telecommunications. That initiative, detailed in a Notice of Proposed Rulemaking (NPRM) and a Notice of Inquiry (NOI) released in August, seeks to define and manage the use of AI in outbound calls and text messages.
The NPRM proposes defining an ‘AI-generated call’ as one utilising AI technologies—such as machine learning algorithms or predictive models—to produce artificial or prerecorded voice or text content. The rules would require callers to disclose AI use and obtain specific consent from consumers, ensuring greater transparency and control over AI-generated communications.
In addition to defining and regulating AI-generated calls, the NPRM includes provisions to address the needs of individuals with speech or hearing disabilities. It proposes an exemption from certain TCPA requirements for AI-generated calls made by these individuals, provided such calls are not for telemarketing or advertising. That exemption aims to facilitate communication for those who depend on AI technologies for telephone interactions, balancing regulatory requirements with accessibility needs.
The NOI, on the other hand, seeks feedback on technologies designed to detect, alert consumers to, and block potentially fraudulent or AI-generated calls, exploring their development and privacy implications. It questions how these technologies handle call content data and whether current privacy laws are adequate.
The FCC also invites comments on the potential costs and benefits of the proposed rules and asserts that its authority to implement them is grounded in the Telephone Consumer Protection Act (TCPA). As the comment deadlines approach, the FCC anticipates a thorough discussion on these regulatory changes, which could significantly impact how AI technologies are managed in telecommunications.
BlackRock and Microsoft have announced plans to create a significant investment fund of over $30 billion to develop infrastructure for AI. The fund, the Global AI Infrastructure Investment Partnership, will focus on building data centres and energy projects to support the growing computational demands of AI technologies. As AI models, particularly those involved in deep learning and large-scale data processing, require immense processing power, these investments are critical to meet the rising energy and infrastructure needs.
The surge in demand for AI has driven tech companies to link thousands of chips together in large clusters to process massive amounts of data, fueling the necessity for specialised data centres. BlackRock and Microsoft’s partnership aims to strengthen AI supply chains and improve energy sourcing to support these advancements. Abu Dhabi-backed investment company MGX will also join as a general partner in the venture, while AI chip leader Nvidia will provide its technical expertise to guide the initiative.
The partnership can mobilise up to $100 billion in investment when debt financing is included. Most of this investment will be in the US, with the rest targeted at partner countries. This ambitious collaboration reflects both the rapidly expanding need for AI infrastructure and the commitment of major global players to fuel its growth.
GSMA has launched its inaugural Responsible AI (RAI) Maturity Roadmap, marking a significant step toward ethical AI practices across the telecom sector. That initiative represents the first sector-wide effort to unify approaches to responsible AI use, providing telecom operators with a structured framework to assess their current AI maturity and set clear goals for future improvement.
The roadmap integrates global standards and regulations from organisations such as the OECD and UNESCO, ensuring its guidelines are comprehensive and internationally recognised. This alignment supports the creation of a robust framework that promotes ethical AI practices throughout the industry.
GSMA and industry leaders emphasise the substantial economic potential of AI, with projections suggesting up to $680 billion in opportunities for the telecom sector over the next 15-20 years. The roadmap focuses on five core dimensions—vision and strategic goals, AI governance, technical controls, third-party collaboration, and change management—providing a comprehensive approach to responsible AI. That includes best practices such as fairness, privacy, safety, transparency, accountability, and environmental impact.
Why does this matter?
Statements from GSMA Director General Mats Granryd and Telefónica Chairman José María Álvarez-Pallete López highlight the need for ethical guidelines to manage AI’s rapid development and set a precedent for other industries to follow in adopting responsible AI practices.
Telecommunications Industry Ireland (TII) advocates reducing VAT on internet access services delivered via fibre and 5G fixed wireless access (FWA) from 23% to 13.5%. The proposed cut is designed to support the goals of the National Connectivity Strategy, which are targeted for 2028.
Furthermore, TII views this VAT reduction as essential for bridging the digital divide, particularly in rural areas, by making high-speed internet more affordable and ensuring equitable access. Continuous upgrades to telecom infrastructure are also vital for meeting the demands of remote working, online education, and other digital services.
As data traffic surges due to digital transformation and AI adoption, ongoing investment in infrastructure becomes crucial for maintaining Ireland’s competitive edge and realising broader economic and social benefits. TII also highlights the significant economic impact of the telecommunications sector.
The sector employs 24,000 people with an annual payroll of €1.6 billion, and it has invested approximately €3.5 billion in network infrastructure over the past five years. Additionally, it contributes €2.7 billion annually to local suppliers. This substantial economic footprint underscores the sector’s critical role in Ireland’s economy and emphasises the necessity for supportive fiscal policies to sustain its growth and investment.
Microsoft is enhancing its $30-per-user Microsoft 365 Copilot subscription with new AI-driven features across Office apps. Excel now integrates Python with Copilot for advanced data analysis, while PowerPoint offers improved AI-assisted narrative building, and Word benefits from more efficient AI-generated drafts. The Copilot AI will also assist with organising Outlook inboxes.
Excel’s Python integration allows users to perform complex data analysis, such as forecasting and machine learning, using natural language commands. PowerPoint’s AI features can now help draft slide decks using company templates, and Teams will summarise both spoken and written conversations in meetings, helping organisers track important questions.
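As an illustration of the forecasting tasks the Excel-Python integration is aimed at, the snippet below fits a simple linear trend to quarterly figures and projects two quarters ahead using plain pandas and NumPy. It is a sketch of the kind of analysis a natural-language request might translate into, not Microsoft’s Copilot or Excel code, and the sales figures are invented.

```python
import numpy as np
import pandas as pd

# Invented quarterly sales figures for illustration.
sales = pd.Series(
    [120, 132, 141, 155, 168, 183],
    index=pd.period_range("2023Q1", periods=6, freq="Q"),
    name="sales",
)

# Fit a simple linear trend and project the next two quarters.
t = np.arange(len(sales))
slope, intercept = np.polyfit(t, sales.values, deg=1)
future_t = np.arange(len(sales), len(sales) + 2)
forecast = pd.Series(
    slope * future_t + intercept,
    index=pd.period_range(sales.index[-1] + 1, periods=2, freq="Q"),
    name="forecast",
)
print(forecast.round(1))
```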
Outlook users will soon benefit from AI-powered inbox prioritisation, with Copilot sorting emails based on personal preferences. Additionally, the AI will be able to track keywords or topics, marking related emails as high priority. Word and OneDrive will also see updates, allowing users to reference data from emails, meetings, and documents seamlessly.
Microsoft aims to attract more businesses to Copilot, with Vodafone signing up for 68,000 licences after successful trials. Microsoft reports that 60% of Fortune 500 companies now use Copilot, with daily usage nearly doubling each quarter.
The company has also announced a new $60 billion share buyback programme, approved by its board, alongside a quarterly dividend increase to $0.83 per share, a 10% rise. The tech giant will host its annual shareholders’ meeting on 10 December.
Amid growing AI investments, Microsoft revealed a significant 77.6% increase in capital spending in the quarter ending 30 June, largely attributed to AI infrastructure. Although its Azure cloud business has exhibited slower growth recently, the company anticipates an acceleration in the second half of fiscal 2025.
Big tech firms like Microsoft and Google are under pressure to justify their AI investments. Microsoft is one of the few companies that has reported AI’s contributions in its earnings. Its stock has risen about 15% this year and saw a slight increase in aftermarket trading following the news.
A group of technology experts has launched a global call for ‘Humanity’s Last Exam’, aiming to push AI systems to their limits by posing the most difficult questions possible. The Center for AI Safety (CAIS) and Scale AI are leading an initiative to establish when AI achieves expert-level capabilities. Current benchmark tests have become too easy for many AI models, so this effort aims to create a new exam that emphasises abstract reasoning, an area in which AI still faces challenges. The organisers hope this new exam will remain relevant as AI technology evolves.
The demand for more rigorous tests comes after OpenAI released its newest model, OpenAI o1, which has shown strong performance on traditional reasoning benchmarks. Dan Hendrycks, executive director of CAIS, stated that AI systems such as Anthropic’s Claude model had performed so well on standard tests that those benchmarks had become less valuable. However, AI has struggled with more intricate tasks like planning and visual pattern recognition, highlighting the necessity for more advanced assessments.
The exam will include over 1,000 crowd-sourced questions that are challenging even for non-experts. Its goal is to prevent AI from simply memorising answers by keeping some questions private. Participants have until 1 November to submit questions, and there will be rewards for the best contributions. While the exam is designed to test AI thoroughly, questions about weapons will be excluded to avoid potential risks.
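For readers curious how a public/private split helps guard against memorisation, here is a minimal sketch under an assumed question format and scoring rule; the organisers’ actual evaluation setup has not been published.

```python
import random


def split_questions(questions: list[dict], private_fraction: float = 0.5, seed: int = 0):
    """Shuffle crowd-sourced questions and hold back a private evaluation set."""
    rng = random.Random(seed)
    shuffled = questions[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - private_fraction))
    return shuffled[:cut], shuffled[cut:]  # (public set, private set)


def score(model_answer_fn, questions: list[dict]) -> float:
    """Fraction of questions answered exactly correctly; exact match is an
    assumed scoring rule, chosen only to keep the example short."""
    correct = sum(
        1 for q in questions if model_answer_fn(q["question"]).strip() == q["answer"]
    )
    return correct / len(questions) if questions else 0.0
```

Because models cannot have seen the private questions during training, a large gap between public and private scores would suggest memorisation rather than genuine reasoning.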
OpenAI, the company behind the popular AI chatbot ChatGPT, has announced that its newly established Safety and Security Committee will now operate independently to oversee the development and deployment of its AI models. This decision follows the committee’s recent recommendations, which were released publicly for the first time. Formed in May, the committee’s goal is to enhance and refine OpenAI’s safety practices amid growing concerns about AI’s ethical use and potential biases.
The committee will be led by Zico Kolter, a professor at Carnegie Mellon University and a member of OpenAI’s board. Under its guidance, OpenAI plans to implement an ‘Information Sharing and Analysis Center’ to facilitate cybersecurity information exchange within the AI industry. Additionally, the company is focusing on improving internal security measures and increasing transparency regarding the capabilities and risks associated with its AI technologies.
In a related development, OpenAI has also partnered with the US government to research and evaluate its AI models further. This move underscores the company’s commitment to addressing both the opportunities and challenges posed by AI as it continues to evolve.