Alphabet’s Google announced that Brazil will be the first country to test a new anti-theft feature for Android phones, utilising AI to detect and lock stolen devices. The initial test phase will offer three locking mechanisms. One uses AI to identify movement patterns typical of theft and lock the screen. Another allows users to remotely lock their screens by entering their phone number and completing a security challenge from another device. The third feature locks the screen automatically if the device remains offline for an extended period.
These features will be available to Brazilian users with Android phones version 10 or higher starting in July, with a gradual rollout to other countries planned for later this year. Phone theft is a significant issue in Brazil, with nearly 1 million cell phones reported stolen in 2022, marking a 16.6% increase from the previous year.
In response to the rising theft rates, the Brazilian government launched an app called Celular Seguro in December, allowing users to report stolen phones and block access via a trusted person’s device. As of last month, approximately 2 million people had registered with the app, leading to the blocking of 50,000 phones, according to the Justice Ministry.
Turkish authorities have arrested a student for using a makeshift device linked to AI software to cheat during a university entrance exam. The student, who was acting suspiciously, was detained by police during the exam and later formally arrested and sent to jail pending trial. Another individual involved in helping the student was also detained.
A police video from Isparta province showed the student’s setup: a camera disguised as a shirt button connected to AI software through a router hidden in the sole of their shoe. The system allowed the AI to generate correct answers, relayed to the student through an earpiece.
This incident highlights the increasing use of advanced technology in cheating, prompting concerns about exam security and integrity. The authorities are now investigating the extent of this cheating method and considering measures to prevent similar occurrences in the future.
Meta Platforms, the owner of Facebook, announced it is developing AI technology tailored specifically for Europe, taking into account the region’s linguistic, geographic, and cultural nuances. The company will train its large language models using publicly shared content from its platforms, including Instagram and Facebook, ensuring that private posts are excluded to maintain user privacy.
Last month, Meta revealed plans to inform Facebook and Instagram users in Europe and the UK about how their public information is utilised to enhance and develop AI technologies. The move aims to increase transparency and reassure users about data privacy.
By focusing on localised AI development, Meta hopes to serve the European market better, reflecting the region’s diverse characteristics in its technology offerings. That effort underscores Meta’s commitment to respecting user privacy while advancing its AI capabilities.
At Apple’s annual developer conference on Monday, the tech giant is anticipated to unveil how it’s integrating AI across its software suite. The integration includes updates to its Siri voice assistant and a potential collaboration with OpenAI, the owner of ChatGPT. With its reputation on the line, Apple aims to reassure investors that it remains competitive in the AI landscape, especially against rivals like Microsoft.
Apple faces the challenge of demonstrating the value of AI to its vast user base, many of whom are not tech enthusiasts. Analysts suggest that Apple needs to showcase how AI can enhance user experiences, a shift from its previous emphasis on enterprise applications. Despite using AI behind the scenes for years, Apple has been reserved in highlighting its role in device functionality, unlike Microsoft’s more vocal approach with OpenAI.
The spotlight is on Siri’s makeover, which is expected to enable more seamless control over various apps. Apple aims to make Siri smarter by integrating generative AI, potentially through a partnership with OpenAI. The move is anticipated to improve user interactions with Siri across different apps, enhancing its usability and effectiveness. Also, Apple recently introduced an AI-focused chip in its latest iPad Pro models, signalling its commitment to AI development. Analysts predict that Apple will provide developers with insights into leveraging these capabilities to support AI computing. Additionally, reports suggest Apple may discuss its plans for using its chips in data centres, which could enhance cloud computing capabilities while maintaining privacy and security features.
The Apple Worldwide Developers Conference (WWDC 2024) will run until Friday, offering developers insights into app updates and new tools. Investors are hopeful that Apple’s AI advancements will drive sales of new iPhones and boost the company’s competitive edge amid fierce global competition.
Google has issued new guidance for developers building AI apps distributed through Google Play in response to growing concerns over the proliferation of AI-powered apps designed to create deepfake nude images. The platform recently announced a crackdown on such applications, signalling a firm stance against the misuse of AI for generating non-consensual and potentially harmful content.
The move comes in the wake of alarming reports highlighting the ease with which these apps can manipulate photos to create realistic yet fabricated nude images of individuals. Reports have surfaced about apps like ‘DeepNude’ and its clones, which can strip clothes from images of women to produce highly realistic nude photos. Another report detailed the widespread availability of apps that could generate deepfake videos, leading to significant privacy invasions and the potential for harassment and blackmail.
Apps offering AI features have to be ‘rigorously tested’ to safeguard against prompts that generate restricted content and have to provide a way for users to signal it. Google strongly suggests that developers document the recommended tests before launching them, as Google could ask them to be reviewed in the future. Additionally, developers can’t advertise that their app breaks any of Google Play’s rules at the risk of getting banned from the app store. The company is also publishing other resources and best practices, like its People + AI Guidebook, which aims to support developers building AI apps.
Why Does It Matter?
The proliferation of AI-driven deepfake apps on platforms like Google Play undermine personal privacy and consent by allowing anyone to generate highly realistic and often explicit content of individuals without their knowledge or consent. Such misuse can lead to severe reputational damage, harassment, and even extortion, affecting both individuals and public figures alike.
China’s ByteDance, the parent company of TikTok, plans to invest around $2.13 billion to establish an AI hub in Malaysia. The plan includes an additional $320 million to expand data centre facilities in Johor state, according to Malaysia’s Trade Minister Tengku Zafrul Aziz.
The development follows significant investments by other tech giants in Malaysia. Google recently announced a $2 billion investment to create its first data centre and Google Cloud region in the country, while Microsoft is set to invest $2.2 billion to enhance cloud and AI services.
The investment is expected to boost Malaysia’s digital economy, aiming to increase its contribution to 22.6% of its GDP by 2025, underscoring the county’s growing importance as a digital economy hub in Southeast Asia.
The surge in AI, particularly with systems like ChatGPT, is facing a potential slowdown due to the impending depletion of publicly available text data, according to a study by Epoch AI. The shortage is projected to occur between 2026 and 2032, highlighting a critical challenge in maintaining the rapid advancement of AI.
AI’s growth has relied heavily on vast amounts of human-generated text data, but this finite resource is diminishing. Companies like OpenAI and Google are currently purchasing high-quality data sources, such as content from Reddit and news outlets, to sustain their AI training. However, the scarcity of fresh data might soon force them to consider using sensitive private data or less reliable synthetic data.
The Epoch AI study emphasises that scaling AI models, which requires immense computing power and large data sets, may become unfeasible as data sources dwindle. While new techniques have somewhat mitigated this issue, the fundamental need for high-quality human-generated data remains. Some experts suggest focusing on specialised AI models rather than larger ones to address this bottleneck.
In response to these challenges, AI developers are exploring alternative methods, including generating synthetic data. However, concerns about the quality and efficiency of such data persist, underlining the complexity of sustaining AI advancements in the face of limited natural resources.
Reporters Without Borders (RSF) has praised the Council of Europe’s (CoE) new Framework Convention on AI for its progress but criticised its reliance on private sector self-regulation. The Convention, which includes 46 European countries, aims to address the impact of AI on human rights, democracy, and the rule of law. While it acknowledges the threat of AI-fueled disinformation, RSF argues that it fails to provide the necessary mechanisms to achieve its goals.
The CoE Convention mandates strict regulatory measures for AI use in the public sector but allows member states to choose self-regulation for the private sector. RSF believes this distinction is a critical flaw, as the private sector, particularly social media companies and other digital service providers, have historically prioritised business interests over the public good. According to RSF, this approach will not effectively combat the disinformation challenges posed by AI.
RSF urges countries that adopt the Convention to implement robust national legislation to strictly regulate AI development and use. That would ensure that AI technologies are deployed ethically and responsibly, protecting the integrity of information and democratic processes. Vincent Berthier, Head of RSF’s Tech Desk, emphasised the need for legal requirements over self-regulation to ensure AI serves the public interest and upholds the right to reliable information.
RSF’s recommendations provide a framework for AI regulation that addresses the shortcomings of both the Council of Europe’s Framework Convention and the European Union’s AI Act, advocating for stringent measures to safeguard the integrity of information and democracy.
According to European banking executives, the rise of AI is increasing banks’ reliance on major US tech firms, raising new risks for the financial industry. AI, already used in detecting fraud and money laundering, has gained significant attention following the launch of OpenAI’s ChatGPT in late 2022, with banks exploring more applications of generative AI.
At a fintech conference in Amsterdam, industry leaders expressed concerns about the heavy computational power needed for AI, which forces banks to depend on a few big tech providers. Bahadir Yilmaz, ING’s chief analytics officer, noted that this dependency on companies like Microsoft, Google, IBM, and Amazon poses one of the biggest risks, as it could lead to ‘vendor lock-in’ and limit banks’ flexibility. These facts also imply the strong impact AI could have on retail investor protection.
Britain has proposed regulations to manage financial firms’ reliance on external tech companies, reflecting concerns that issues with a single cloud provider could disrupt services across multiple financial institutions. Deutsche Bank’s technology strategy head, Joanne Hannaford, highlighted that accessing the necessary computational power for AI is feasible only through Big Tech.
The European Union’s securities watchdog recently emphasised that banks and investment firms must protect customers when using AI and maintain boardroom responsibility.
Top officials at the US Federal Election Commission (FEC) are divided over a proposal requiring political advertisements on broadcast radio and television to disclose if their content is generated by AI. FEC Vice Chair Ellen Weintraub backs the proposal, initiated by FCC Chairwoman Jessica Rosenworcel, which aims to enhance transparency in political ads, whereas FEC Chair Sean Cooksey opposes it.
The proposal, which does not ban AI-generated content, comes amid increasing concerns in Washington that such content could mislead voters in the upcoming 2024 elections. Rosenworcel emphasised the risk of ‘deepfakes’ and other altered media misleading the public and noted that the FCC has long-standing authority to mandate disclosures. Weintraub also highlighted the importance of transparency for public benefit and called for collaborative regulatory efforts between the FEC and FCC.
However, Cooksey warned that mandatory disclosures might conflict with existing laws and regulations, creating confusion in political campaigns. Republican FCC Commissioner Brendan Carr criticised the proposal, pointing out inconsistencies in regulation, as the FCC cannot oversee internet, social media, or streaming service ads. The debate gained traction following an incident in January where a fake AI-generated robocall impersonating US President Joe Biden aimed to influence New Hampshire’s Democratic primary, leading to charges against a Democratic consultant.