Privacy concerns intensify as Big Tech announces new AI-enhanced functionalities

Apple, Microsoft, and Google are spearheading a technological revolution with their vision of AI smartphones and computers. These advanced devices aim to automate tasks like photo editing and sending birthday wishes, promising a seamless user experience. However, to achieve this level of functionality, these tech giants are seeking increased access to user data.

In this evolving landscape, users are confronted with the decision of whether to share more personal information. Windows computers may capture screenshots of user activities, iPhones could aggregate data from various apps, and Android phones might analyse calls in real time to detect potential scams. The shift towards data-intensive operations raises concerns about privacy and security, as companies require deeper insights into user behaviour to deliver tailored services.

The emergence of OpenAI’s ChatGPT has catalysed a transformation in the tech industry, prompting major players like Apple, Google, and Microsoft to revamp their strategies and invest heavily in AI-driven services. The focus is on creating a dynamic computing interface that continuously learns from user interactions to provide proactive assistance, a strategy these companies see as essential to their future. While the potential benefits of AI integration are substantial, the increased reliance on cloud computing and data processing carries inherent security risks. As AI algorithms demand more computational power, sensitive personal data may need to be transmitted to external servers for analysis. Transferring data to the cloud introduces vulnerabilities, potentially exposing user information to unauthorised access by third parties.

Against this backdrop, tech companies have emphasised their commitment to safeguarding user data, implementing encryption and stringent protocols to protect privacy. As users navigate this evolving landscape of AI-driven technologies, understanding the implications of data sharing and the mechanisms employed to protect privacy is crucial. Apple, Microsoft, and Google are at the forefront of integrating AI into their products and services, each with a unique data privacy and security approach. Apple, for instance, unveiled Apple Intelligence, a suite of AI services integrated into its devices, promising enhanced functionalities like object removal from photos and intelligent text responses. Apple is also revamping its voice assistant, Siri, to enhance its conversational abilities and provide it with access to data from various applications.

The company aims to process AI data locally to minimise external exposure, with stringent measures in place to secure data transmitted to servers. Apple’s commitment to protecting user data differentiates it from other companies that retain data on their servers. However, concerns have been raised about the lack of transparency regarding Siri requests sent to Apple’s servers. Security researcher Matthew Green argued that there are inherent security risks to any data leaving a user’s device for processing in the cloud.

Microsoft has introduced AI-powered features in its new Windows computers, called Copilot+ PCs, with a new chip and other technologies intended to keep data private and secure. The Recall system enables users to quickly retrieve documents and files by typing casual phrases, with the computer taking screenshots every five seconds and analysing them directly on the PC. While Recall offers enhanced functionality, security researchers caution about potential risks if the data is hacked. Google has also unveiled a suite of AI services, including a scam detector for phone calls and an ‘Ask Photos’ feature. The scam detector operates on the phone without Google listening to calls, enhancing user security. However, concerns have been raised about the transparency of Google’s approach to AI privacy, particularly regarding the storage and potential use of personal data for improving its services.
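
Microsoft has not published Recall’s internals, so the sketch below is only a hypothetical illustration of the general pattern described above: periodic local screenshots, on-device text extraction, and a searchable local index. The library choices (mss, pytesseract, SQLite FTS5) and every name in the snippet are assumptions, not Recall’s actual design.

```python
# Hypothetical sketch of a Recall-style local capture-and-search loop.
# All library choices and names are illustrative assumptions; Microsoft has
# not published Recall's actual implementation.
import sqlite3
import time

import mss                 # screen capture
import pytesseract         # local OCR
from PIL import Image

DB = sqlite3.connect("recall_index.db")
DB.execute("CREATE VIRTUAL TABLE IF NOT EXISTS snaps USING fts5(ts, text)")

def capture_and_index() -> None:
    """Grab the screen, extract text locally, and store it in a local index."""
    with mss.mss() as sct:
        shot = sct.grab(sct.monitors[1])
        img = Image.frombytes("RGB", shot.size, shot.rgb)
    text = pytesseract.image_to_string(img)          # runs entirely on-device
    DB.execute("INSERT INTO snaps VALUES (?, ?)", (time.time(), text))
    DB.commit()

def search(phrase: str):
    """Find past screenshots whose extracted text matches a casual phrase."""
    return DB.execute(
        "SELECT ts FROM snaps WHERE snaps MATCH ?", (phrase,)
    ).fetchall()

if __name__ == "__main__":
    while True:
        capture_and_index()
        time.sleep(5)   # Recall reportedly captures roughly every five seconds
```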

Why does it matter?

As these tech giants continue to innovate with AI technologies, users must weigh the benefits of enhanced functionalities against potential privacy and security risks associated with data processing and storage in the cloud. Understanding how companies handle user data and ensuring transparency in data practices are essential for maintaining control over personal information in the digital age.

Toys ‘R’ Us pioneers text-to-video AI-generated advertising clip

Toys ‘R’ Us showcased an AI-generated short promotional clip at the 2024 Cannes Lions Festival in France. The film, created in collaboration with the creative agency Native Foreign using OpenAI’s Sora, depicts a young version of company founder Charles Lazarus and is part of an effort to regenerate interest in the brand. Except for a few tweaks requiring human input, the 66-second video was created entirely with Sora.

Sora is a diffusion model capable of three modes of video creation. In addition to generating videos from text prompts, it can extend, modify, or fill in missing frames in an existing video, and it can animate elements of a still image. Like other diffusion models, it starts from something resembling static noise and gradually removes that noise over many steps until a coherent video emerges.
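
OpenAI has not released Sora’s architecture or code, so the following is only a heavily simplified sketch of the iterative denoising idea described above. The tensor shapes, the step schedule, and the placeholder ‘denoiser’ are all illustrative assumptions rather than anything Sora-specific.

```python
# Heavily simplified sketch of iterative denoising in a diffusion model.
# The denoiser below is a placeholder; a real model is trained to predict
# the noise actually present in its input.
import torch

def fake_denoiser(noisy_video: torch.Tensor, step: int, prompt: str) -> torch.Tensor:
    """Stand-in for a learned network that predicts the noise in its input."""
    return torch.zeros_like(noisy_video)  # a trained model would return real estimates

def generate(prompt: str, frames: int = 16, height: int = 64, width: int = 64,
             steps: int = 50) -> torch.Tensor:
    # Start from pure static noise, shaped (frames, channels, height, width).
    video = torch.randn(frames, 3, height, width)
    for step in reversed(range(steps)):
        predicted_noise = fake_denoiser(video, step, prompt)
        video = video - predicted_noise / steps   # strip away a little noise each step
    return video

clip = generate("a giraffe walking through a toy store")
print(clip.shape)   # torch.Size([16, 3, 64, 64])
```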

Company CMO Kim Miller told festival attendees that, despite the film’s AI foundation, production required human involvement throughout the process, including creative intuition. The film received mixed reviews on social media, with some viewers even calling it creepy.

Industry leaders unite for ethical AI data practices

Several companies that license music, images, videos, and other datasets for training AI systems have formed the first trade group in the sector, the Dataset Providers Alliance (DPA). The founding members of the DPA include Rightsify, vAIsual, Pixta, and Datarade. The group aims to advocate for ethical data sourcing, including protecting intellectual property rights and ensuring rights for individuals depicted in datasets.

The rise of generative AI technologies has led to backlash from content creators and numerous copyright lawsuits against major tech companies like Google, Meta, and OpenAI. Developers often train AI models using vast amounts of content, much of which is scraped from the internet without permission. To address these issues, the DPA will establish ethical standards for data transactions, ensuring that members do not sell data obtained without explicit consent. The alliance will also push for legislative measures such as the US NO FAKES Act, which would penalise unauthorised digital replicas of voices or likenesses, and will support transparency requirements for AI training data.

The DPA plans to release a white paper in July outlining its positions and advocating for these standards and legislative changes to ensure ethical practices in AI data sourcing and usage.

Qatar’s tech hub expansion accelerates as AI boosts its growth

Qatar is rapidly advancing in technology, positioning itself as a global tech hub. The President of Qatar Science and Technology Park (QSTP), Dr Jack Lau, highlighted the role of AI in boosting the Qatari and GCC markets, emphasising the need for region-specific, tailored solutions.

AI applications like ChatGPT are well researched in Qatar. However, optimisation for different languages, increased speed, and more accurate responses have yet to be implemented.

Dr Lau noted his satisfaction with emerging AI tools, particularly for translating and customising presentation content. He stressed the importance of cultural sensitivity and corporate-specific needs in AI applications, as well as data privacy and security, and underscored that these technologies still have significant room for improvement and further development.

QSTP plays a crucial role in supporting Qatar’s national vision of talent transformation through education, innovation, and entrepreneurship. The organisation is exploring opportunities for individuals with the right educational background to contribute significantly to AI, robotics, medical sciences, and sustainable farming.

CLTR urges UK government to create formal system for managing AI misuse and malfunctions

The UK should implement a system to log misuse and malfunctions in AI to keep ministers informed of alarming incidents, according to a report by the Centre for Long-Term Resilience (CLTR). The think tank, which focuses on responses to unforeseen crises, urges the next government to establish a central hub for recording AI-related episodes across the country, similar to the Air Accidents Investigation Branch.

CLTR highlights that since 2014, news outlets have recorded 10,000 AI ‘safety incidents,’ documented in a database maintained by the Organisation for Economic Co-operation and Development (OECD). These incidents range from physical harm to economic, reputational, and psychological damage. Examples include a deepfake of Labour leader Keir Starmer and Google’s Gemini model generating historically inaccurate depictions of World War II soldiers. The report’s author, Tommy Shaffer Shane, stresses that incident reporting has been transformative in aviation and medicine but is largely missing from AI regulation.

The think tank recommends the UK government adopt a robust incident reporting regime to manage AI risks effectively. It suggests following the safety protocols of industries like aviation and medicine, as many AI incidents may go unnoticed due to the lack of a dedicated AI regulator. Labour has pledged to introduce binding regulations for advanced AI companies, and CLTR emphasises that such a setup would help the government anticipate and respond quickly to AI-related issues.

Additionally, CLTR advises creating a pilot AI incident database, which could collect episodes from existing bodies such as the Air Accidents Investigation Branch and the Information Commissioner’s Office. The think tank also calls for UK regulators to identify gaps in AI incident reporting and build on the algorithmic transparency reporting standard already in place. An effective incident reporting system would help the Department for Science, Innovation and Technology (DSIT) stay informed and address novel AI-related harms proactively.
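
CLTR has not published a schema for such a pilot database, so the sketch below is only a hypothetical illustration of the kind of fields an incident record might capture, loosely following the harm categories mentioned in the report; every field name and value is an assumption.

```python
# Hypothetical sketch of a record in a pilot AI incident database.
# The fields below merely illustrate the kind of information such a
# database might capture; CLTR has not specified an actual schema.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class HarmType(Enum):
    PHYSICAL = "physical"
    ECONOMIC = "economic"
    REPUTATIONAL = "reputational"
    PSYCHOLOGICAL = "psychological"

@dataclass
class AIIncident:
    reported_on: date
    reporting_body: str                 # e.g. a regulator contributing reports
    system_description: str             # the AI system involved
    harm_types: list[HarmType] = field(default_factory=list)
    summary: str = ""

incident = AIIncident(
    reported_on=date(2024, 1, 15),
    reporting_body="Information Commissioner's Office",
    system_description="Generative model used to produce a political deepfake",
    harm_types=[HarmType.REPUTATIONAL, HarmType.PSYCHOLOGICAL],
    summary="Deepfake audio of a political figure circulated on social media.",
)
```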

Meta launches AI chatbot in India, rivalling Google’s Gemini

Meta has officially introduced its AI chatbot, powered by Llama 3, to all users in India following comprehensive testing during the country’s general elections. Initially trialled on WhatsApp, Instagram, Messenger, and Facebook since April, the chatbot is now fully accessible through the search bars in these apps and the Meta.AI website. Despite supporting only English, its functionalities are on par with other major AI services like ChatGPT and Google’s Gemini, including tasks such as suggesting recipes, planning workouts, writing emails, summarising text, recommending Instagram Reels, and answering questions about Facebook posts.

The launch aims to capitalise on India’s vast user base, notably the 500 million WhatsApp users, by embedding the chatbot deeper into the user experience. However, some limitations have been observed, such as the chatbot’s inability to fully follow the context of group conversations except when it is directly mentioned or replied to. Moreover, while the chatbot cannot be disabled, users can choose not to interact with it during searches.

Despite its capabilities, Meta AI has faced criticisms for biases in its image generation, often depicting Indian men with turbans and producing images of traditional Indian houses, which Meta has acknowledged and aims to address through ongoing updates. The launch coincides with Google releasing its Gemini app in India, which, unlike Meta’s chatbot, supports multiple local languages, potentially giving Google a competitive advantage in the linguistically diverse Indian market.

Why does it matter?

In summary, Meta’s rollout of its English-only AI chatbot in India is a strategic effort to leverage its extensive user base by offering robust functionalities similar to established competitors. While it faces initial limitations and biases, Meta is actively working on improvements. The concurrent release of Google’s Gemini app sets up a competitive landscape, underscoring the dynamic and evolving nature of AI services in India.

Central banks urged to embrace AI

The Bank for International Settlements (BIS) has advised central banks to harness the benefits of AI while cautioning against its use in replacing human decision-makers. In its first comprehensive report on AI, the BIS highlighted the technology’s potential to enhance real-time data monitoring and improve inflation predictions – capabilities that have become critical following the unforeseen inflation surges during the COVID-19 pandemic and the Ukraine crisis. While AI models could mitigate future risks, their unproven and sometimes inaccurate nature makes them unsuitable as autonomous rate setters, emphasised Cecilia Skingsley of the BIS. Human accountability remains crucial for decisions on borrowing costs, she noted.

The BIS, often termed the central bank for central banks, is already engaged in eight AI-focused projects to explore the technology’s potential. Hyun Song Shin, the BIS’s head of research, stressed that AI should not be seen as a ‘magical’ solution but acknowledged its value in detecting financial system vulnerabilities. However, he also warned of the risks associated with AI, such as new cyber threats and the possibility of exacerbating financial crises if mismanaged.

The widespread adoption of AI could significantly impact labour markets, productivity, and economic growth, with firms potentially adjusting prices more swiftly in response to economic changes, thereby influencing inflation. The BIS has called for the creation of a collaborative community of central banks to share experiences, best practices, and data to navigate the complexities and opportunities presented by AI. That collaboration aims to ensure AI’s integration into financial systems is both effective and secure, promoting resilient and responsive economic governance.

In conclusion, the BIS’s advisory underscores the importance of balancing AI’s promising capabilities with the necessity for human intervention in central banking operations. By fostering an environment for shared knowledge and collaboration among central banks, the BIS seeks to maximise AI benefits while mitigating inherent risks, thereby supporting more robust economic management in the face of technological advancements.

AI startup Etched raises $120M to produce specialised AI chip

Etched, an AI startup based in San Francisco, announced that it has secured $120 million in funding to create a specialised chip tailored to running the type of AI model that underpins OpenAI’s ChatGPT and Google’s Gemini.

Unlike Nvidia, whose general-purpose chips dominate the market for server AI chips with a roughly 80% share, Etched aims to build a specialised processor optimised for inference, the work of generating content and responses, tailored specifically to transformer-based AI models. The company’s CEO, Gavin Uberti, sees this as a strategic bet on the longevity of transformer models in the AI landscape.
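
Etched has not published its chip’s software interface, so the snippet below merely illustrates the workload in question, autoregressive transformer inference, using an ordinary open-source stack (Hugging Face transformers with GPT-2) as a stand-in rather than anything Etched-specific.

```python
# Generic illustration of transformer inference: generating one token per
# forward pass, which is the workload Etched's chip reportedly targets.
# This uses a standard CPU/GPU software stack, not Etched's hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def generate_greedy(prompt: str, max_new_tokens: int = 20) -> str:
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        for _ in range(max_new_tokens):
            logits = model(ids).logits                      # one forward pass per token
            next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)
            ids = torch.cat([ids, next_id], dim=-1)
    return tokenizer.decode(ids[0], skip_special_tokens=True)

print(generate_greedy("Specialised inference chips are designed to"))
```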

In Etched’s funding round, key investors include former PayPal CEO Peter Thiel and Replit CEO Amjad Masad. The startup has also partnered with Taiwan Semiconductor Manufacturing Co. (TSMC) to fabricate its chips. Uberti highlighted the importance of the funding to cover the costs associated with sending chip designs to TSMC and manufacturing the chips, a process known as ‘taping out.’

While Etched did not disclose its current valuation, its $5.4-million seed-funding round in March 2023 valued the company at $34 million. The success of its specialised chip could position Etched as an important player in the AI chip market, provided transformer-based AI models continue to be prevalent in the industry.

Chinese AI companies respond to OpenAI restrictions

Chinese AI companies are swiftly responding to reports that OpenAI intends to restrict access to its technology in certain regions, including China. OpenAI, the creator of ChatGPT, is reportedly planning to block access to its API for entities in China and other countries. While ChatGPT is not directly available in mainland China, many Chinese startups have used OpenAI’s API platform to develop their applications. Users in China have received emails warning about restrictions, with measures set to take effect from 9 July.

In light of these developments, Chinese tech giants like Baidu and Alibaba Cloud are stepping in to attract users affected by OpenAI’s restrictions. Baidu announced an ‘Inclusive Program’, offering new users free migration to its Ernie platform along with additional tokens for its flagship Ernie 3.5 model to match their previous OpenAI usage. Similarly, Alibaba Cloud is offering free tokens and migration services for OpenAI API users through its AI platform, at competitive prices compared with GPT-4.
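
Many providers expose OpenAI-compatible endpoints, which is what makes this kind of migration straightforward in practice. The article does not say that Baidu’s or Alibaba Cloud’s programmes work exactly this way, so the endpoint URL and model name below are placeholders; only the standard OpenAI Python client usage is real.

```python
# Illustrative sketch of migrating off the OpenAI API by pointing an
# OpenAI-compatible client at another provider's endpoint. The base_url and
# model name are placeholders, not any provider's actual details.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-provider.cn/compatible/v1",  # hypothetical endpoint
    api_key="YOUR_PROVIDER_KEY",
)

response = client.chat.completions.create(
    model="provider-flagship-model",   # placeholder model name
    messages=[{"role": "user", "content": "Summarise today's AI news in one line."}],
)
print(response.choices[0].message.content)
```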

Zhipu AI, another prominent player in China’s AI sector, has also announced a ‘Special Migration Program’ for OpenAI API users. The company emphasises its GLM model as a benchmark against OpenAI’s ecosystem, highlighting its self-developed technology for security and controllability. Over the past year, numerous Chinese companies have launched chatbots powered by their proprietary AI models, indicating a growing trend towards domestic AI development and innovation.

Italian watchdog tests AI for market oversight

Italy’s financial watchdog, Consob, has begun experimenting with AI to enhance its oversight capabilities, particularly in the initial review of listing prospectuses and the detection of insider trading. According to Consob, these AI algorithms aim to swiftly identify potential instances of insider trading, which traditionally requires significantly more time when conducted manually.

The agency reported that its AI algorithms can detect errors in listing prospectuses in just three seconds, a task that typically takes a human analyst at least 20 minutes. These efforts were part of testing conducted last year using prototypes developed in collaboration with Scuola Normale Superiore University in Pisa, alongside an additional model developed independently.
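
Consob has not described the algorithms behind these results, so the following is only a conceptual sketch of one common approach to flagging potential insider trading: checking for abnormal trading volume in the days before a price-moving announcement. The thresholds, data, and function names are illustrative, not Consob’s method.

```python
# Conceptual sketch of flagging trades for insider-trading review by spotting
# abnormal volume shortly before an announcement. Purely illustrative.
import statistics

def flag_pre_announcement_volume(daily_volumes: list[float],
                                 announcement_day: int,
                                 window: int = 5,
                                 z_threshold: float = 3.0) -> bool:
    """Return True if volume just before the announcement is anomalously high."""
    baseline = daily_volumes[: announcement_day - window]
    recent = daily_volumes[announcement_day - window : announcement_day]
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        return False
    return any((v - mean) / stdev > z_threshold for v in recent)

# Toy example: quiet trading, then a volume spike two days before the news.
volumes = [100, 95, 110, 105, 98, 102, 97, 103, 99, 101, 100, 104, 480, 450, 120]
print(flag_pre_announcement_volume(volumes, announcement_day=14))  # True
```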

Consob views the integration of AI as pivotal in enhancing the effectiveness of regulatory controls to detect financial misconduct. The next phase involves transitioning from prototype testing to fully incorporating AI into Consob’s regular operational procedures. That initiative mirrors similar efforts by financial regulators globally who are increasingly leveraging AI to bolster consumer protection and regulatory oversight.

For instance, in the United Kingdom, the Financial Conduct Authority (FCA) has utilised AI technologies to combat online scams and protect consumers. That trend underscores a broader international movement within regulatory bodies to harness AI’s potential in safeguarding market integrity and enhancing regulatory efficiency.