Privacy concerns intensify as Big Tech announces new AI-enhanced functionalities

Apple, Microsoft, and Google are spearheading a technological revolution with their vision of AI smartphones and computers. These advanced devices aim to automate tasks like photo editing and sending birthday wishes, promising a seamless user experience. However, to achieve this level of functionality, these tech giants are seeking increased access to user data.

In this evolving landscape, users are confronted with the decision of whether to share more personal information. Windows computers may capture screenshots of user activities, iPhones could aggregate data from various apps, and Android phones might analyse calls in real time to detect potential scams. The shift towards data-intensive operations raises concerns about privacy and security, as companies require deeper insights into user behaviour to deliver tailored services.

The emergence of OpenAI’s ChatGPT has catalysed a transformation in the tech industry, prompting major players like Apple, Google, and Microsoft to revamp their strategies and invest heavily in AI-driven services. The focus is on creating a dynamic computing interface that continuously learns from user interactions to provide proactive assistance. While the potential benefits of AI integration are substantial, the increased reliance on cloud computing and data processing carries inherent security risks. As AI algorithms demand more computational power than devices themselves can supply, sensitive personal data may need to be transmitted to external servers for analysis. That transfer to the cloud introduces vulnerabilities, potentially exposing user information to unauthorised access by third parties.

Against this backdrop, tech companies have emphasised their commitment to safeguarding user data, implementing encryption and stringent protocols to protect privacy. As users navigate this evolving landscape of AI-driven technologies, understanding the implications of data sharing and the mechanisms employed to protect privacy is crucial. Apple, Microsoft, and Google are at the forefront of integrating AI into their products and services, each with a unique data privacy and security approach. Apple, for instance, unveiled Apple Intelligence, a suite of AI services integrated into its devices, promising enhanced functionalities like object removal from photos and intelligent text responses. Apple is also revamping its voice assistant, Siri, to enhance its conversational abilities and provide it with access to data from various applications.

The company aims to process AI data locally to minimise external exposure, with stringent measures in place to secure data transmitted to servers. Apple’s commitment to protecting user data differentiates it from other companies that retain data on their servers. However, concerns have been raised about the lack of transparency regarding Siri requests sent to Apple’s servers. Security researcher Matthew Green argued that there are inherent security risks to any data leaving a user’s device for processing in the cloud.

Microsoft has introduced AI-powered features in its new line of Windows computers, called Copilot+ PCs, and says a new chip and other technologies will keep the associated data private and secure. The Recall system lets users quickly retrieve documents and files by typing casual phrases; the computer takes screenshots every five seconds and analyses them directly on the PC (a rough sketch of this capture-and-index pattern follows below). While Recall offers enhanced functionality, security researchers caution about the potential risks if that data were hacked. Google has also unveiled a suite of AI services, including a scam detector for phone calls and an ‘Ask Photos’ feature. The scam detector operates on the phone itself, without Google listening to calls, enhancing user security. However, concerns have been raised about the transparency of Google’s approach to AI privacy, particularly regarding the storage and potential use of personal data to improve its services.
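For readers curious about the mechanics, the sketch below illustrates the general capture-and-index pattern such a feature implies: periodic local screenshots, on-device text extraction, and a searchable index that stays on the machine. It is a loose illustration under those assumptions, not Microsoft’s implementation; it relies on the third-party mss and pytesseract libraries, and the simple in-memory index is a stand-in for whatever local storage Recall actually uses.

```python
import time

import pytesseract   # OCR; needs a local Tesseract installation
from mss import mss  # cross-platform screen capture
from PIL import Image

# In-memory index of (timestamp, extracted text) pairs, kept on the device.
index = []

def capture_and_index():
    """Grab one screenshot, run OCR locally, and store the text in the index."""
    with mss() as screen:
        shot = screen.grab(screen.monitors[1])  # primary monitor
        img = Image.frombytes("RGB", shot.size, shot.rgb)
    index.append((time.time(), pytesseract.image_to_string(img)))

def search(phrase):
    """Return timestamps of snapshots whose on-screen text mentions the phrase."""
    phrase = phrase.lower()
    return [ts for ts, text in index if phrase in text.lower()]

if __name__ == "__main__":
    # Capture a snapshot every five seconds; interrupt with Ctrl+C.
    while True:
        capture_and_index()
        time.sleep(5)
```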

Why does it matter?

As these tech giants continue to innovate with AI technologies, users must weigh the benefits of enhanced functionalities against potential privacy and security risks associated with data processing and storage in the cloud. Understanding how companies handle user data and ensuring transparency in data practices are essential for maintaining control over personal information in the digital age.

Toys ‘R’ Us pioneers text-to-video AI-generated advertising clip

Toys ‘R’ Us debuted an AI-generated short promotional clip featuring company founder Charles Lazarus at the 2024 Cannes Lions Festival in France. In an effort to regenerate interest in the brand, the company collaborated with the creative agency Native Foreign, using OpenAI’s Sora. Except for a few tweaks that required human input, the 66-second video was created entirely by Sora.

Sora is a diffusion model that supports several modes of video creation. In addition to generating videos from text prompts, it can extend or modify an existing video and fill in missing frames. To generate a clip, it starts from what looks like static noise and gradually transforms it by removing the noise over many steps. The model can also animate elements of a still image.
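To give a rough sense of the iterative denoising at the heart of such models, the snippet below sketches a generic diffusion-style sampling loop. It is an illustration only, not Sora’s architecture; the toy_denoise_step function is a hypothetical stand-in for a large trained network.

```python
import numpy as np

def generate_clip(denoise_step, num_steps=50, shape=(16, 64, 64, 3)):
    """Generic diffusion-style sampling loop (illustrative only).

    Starts from pure noise shaped like a short clip
    (frames x height x width x channels) and repeatedly asks a
    denoising model to remove a little noise at each step.
    """
    video = np.random.randn(*shape)          # begin with static-like noise
    for step in reversed(range(num_steps)):  # walk the noise level down to zero
        video = denoise_step(video, step)    # model returns a slightly cleaner clip
    return video

def toy_denoise_step(video, step):
    """Trivial stand-in that merely shrinks the noise; a real model is a learned network."""
    return video * 0.9

clip = generate_clip(toy_denoise_step)
print(clip.shape)  # (16, 64, 64, 3)
```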

Company CMO Kim Miller told festival attendees that, although the film was generated with AI, production required human involvement throughout the process, including a reliance on creative intuition. The film received mixed reviews on social media, with some viewers even calling it creepy.

Industry leaders unite for ethical AI data practices

Several companies that license music, images, videos, and other datasets for training AI systems have formed the first trade group in the sector, the Dataset Providers Alliance (DPA). The founding members of the DPA include Rightsify, vAIsual, Pixta, and Datarade. The group aims to advocate for ethical data sourcing, including protecting intellectual property rights and ensuring rights for individuals depicted in datasets.

The rise of generative AI technologies has led to backlash from content creators and numerous copyright lawsuits against major tech companies like Google, Meta, and OpenAI. Developers often train AI models on vast amounts of content, much of which is scraped from the internet without permission. To address these issues, the DPA will establish ethical standards for data transactions, ensuring that members do not sell data obtained without explicit consent. The alliance will also push for legislative measures such as the NO FAKES Act, which would penalise unauthorised digital replicas of voices or likenesses, and will support transparency requirements for AI training data.

The DPA plans to release a white paper in July outlining its positions and advocating for these standards and legislative changes to ensure ethical practices in AI data sourcing and usage.

Qatar’s tech hub expansion accelerates as AI boosts its growth

Qatar is rapidly advancing in technology, positioning itself as a global tech hub. The President of Qatar Science and Technology Park (QSTP), Dr Jack Lau, highlighted the role of AI in boosting the Qatari and GCC markets, emphasising the need for region-specific, tailored solutions.

AI applications like ChatGPT are well researched in Qatar. However, optimisation for different languages, faster performance, and more accurate responses have yet to be achieved.

Dr Lau noted his satisfaction with emerging AI tools, particularly for translating and customising presentation content. He stressed the importance of cultural sensitivity and corporate-specific needs in AI applications, while ensuring data privacy and security, and underscored that these technologies still have significant room for refinement and further development.

QSTP plays a crucial role in supporting Qatar’s national vision of talent transformation through education, innovation, and entrepreneurship. The organisation is exploring opportunities for individuals with the right educational background to contribute significantly to AI, robotics, medical sciences, and sustainable farming.

Malaysia seeks Chinese investment for data centres

Malaysia is in talks with potential Chinese investors about building data centres as part of its strategy to attract high-quality investments, according to Economy Minister Rafizi Ramli. Rafizi stated in an interview that the government aims to enhance its infrastructure to leverage the AI boom.

In the past year, major US tech companies such as Microsoft, Google, and Nvidia have announced plans to establish data centres in Malaysia. Chinese firms are also interested in developing additional facilities to support local tech companies seeking to expand into the Southeast Asian market, Rafizi noted.

Rafizi emphasised Malaysia’s goal to expedite its transition from the back end to the front end of the semiconductor industry, focusing on integrated circuit design and data centres. He clarified that discussions with Chinese companies have not included using their Malaysian operations to bypass US tariffs.

OpenAI delays ChatGPT voice features amidst safety concerns and legal threats

OpenAI has announced a delay in launching new voice and emotion-reading features for its ChatGPT chatbot, citing the need for more safety testing. Originally set to be available to some paying subscribers in late June, these features will now be rolled out in the autumn.

The postponement follows a demonstration last month that garnered user excitement and sparked controversy, including a potential lawsuit from actress Scarlett Johansson, who claimed her voice was mimicked for an AI persona.

OpenAI’s demo showcased the chatbot’s ability to speak in synthetic voices and respond to users’ tones and expressions, with one voice resembling Johansson’s role in the movie ‘Her.’ However, CEO Sam Altman denied using Johansson’s voice, clarifying that a different actor was used for training. The company aims to ensure the new features meet high safety and reliability standards before release.

The delay highlights ongoing challenges in the AI industry. Companies like Google and Microsoft have faced similar setbacks, dealing with errors and controversial outputs from their AI tools.

OpenAI emphasised the complexity of designing chatbots that interpret and mimic emotions, which can introduce new risks and potential for misuse. Additionally, competition in the AI industry is intensifying as companies race to satisfy increasingly demanding markets and customers. Even so, the company appears committed to releasing these advanced features thoughtfully and safely.

CLTR urges UK government to create formal system for managing AI misuse and malfunctions

The UK should implement a system to log misuse and malfunctions in AI to keep ministers informed of alarming incidents, according to a report by the Centre for Long-Term Resilience (CLTR). The think tank, which focuses on responses to unforeseen crises, urges the next government to establish a central hub for recording AI-related episodes across the country, similar to the Air Accidents Investigation Branch.

CLTR highlights that since 2014, news outlets have recorded 10,000 AI ‘safety incidents,’ documented in a database by the Organisation for Economic Co-operation and Development (OECD). These incidents range from physical harm to economic, reputational, and psychological damage. Examples include a deepfake of Labour leader Keir Starmer and Google’s Gemini model depicting World War II soldiers inaccurately. The report’s author, Tommy Shaffer Shane, stresses that incident reporting has been transformative in aviation and medicine but is largely missing in AI regulation.

The think tank recommends the UK government adopt a robust incident reporting regime to manage AI risks effectively. It suggests following the safety protocols of industries like aviation and medicine, as many AI incidents may go unnoticed due to the lack of a dedicated AI regulator. Labour has pledged to introduce binding regulations for advanced AI companies, and CLTR emphasises that such a setup would help the government anticipate and respond quickly to AI-related issues.

Additionally, CLTR advises creating a pilot AI incident database, which could collect episodes from existing bodies such as the Air Accidents Investigation Branch and the Information Commissioner’s Office. The think tank also calls for UK regulators to identify gaps in AI incident reporting and build on the algorithmic transparency reporting standard already in place. An effective incident reporting system would help the Department for Science, Innovation and Technology (DSIT) stay informed and address novel AI-related harms proactively.

Experts join Regulating AI’s new advisory board

Regulating AI, a non-profit organisation dedicated to promoting AI governance, has announced the formation of its advisory board. Board members include notable figures such as former US Senator Cory Gardner, former Bolivian President Jorge Quiroga, and former Finnish Prime Minister Esko Aho. The board aims to foster a sustainable AI ecosystem that benefits humanity while addressing potential risks and ethical concerns.

The founder of Regulating AI, Sanjay Puri, expressed his excitement about the diverse expertise and perspectives the new board members bring. He emphasised the importance of their wisdom in navigating the complexities of the rapidly evolving AI landscape and shaping policies that balance innovation with ethical considerations and societal well-being.

One of the organisation’s key initiatives is developing a comprehensive AI governance framework. That includes promoting international cooperation, advocating for diverse voices, and exploring sector-specific AI implications. Former President of Bolivia Jorge Quiroga highlighted the transformational power of AI and the need for effective regulation that considers the unique challenges of developing nations.

Regulating AI aims to build public trust, align international standards, and empower various stakeholders through its board. Former US Senator Gardner underscored the necessity of robust regulatory frameworks to ensure AI is developed and deployed responsibly, protecting consumer privacy, preventing algorithmic bias, and upholding democratic values. The organisation also seeks to educate and raise awareness about AI regulations, fostering discussions among experts and policymakers to advance understanding and implementation.

Meta launches AI chatbot in India, rivalling Google’s Gemini

Meta has officially introduced its AI chatbot, powered by Llama 3, to all users in India following comprehensive testing during the country’s general elections. Initially trialled on WhatsApp, Instagram, Messenger, and Facebook since April, the chatbot is now fully accessible through the search bars in these apps and the Meta.AI website. Although it supports only English, its functionalities are on par with other major AI services like ChatGPT and Google’s Gemini, including tasks such as suggesting recipes, planning workouts, writing emails, summarising text, recommending Instagram Reels, and answering questions about Facebook posts.

The launch aims to capitalise on India’s vast user base, notably its 500 million WhatsApp users, by embedding the chatbot deeper into the user experience. However, some limitations have been observed, such as the chatbot’s inability to fully understand the context of group conversations unless it is directly mentioned or replied to. Moreover, while the chatbot cannot be disabled, users can choose not to interact with it during searches.

Despite its capabilities, Meta AI has faced criticisms for biases in its image generation, often depicting Indian men with turbans and producing images of traditional Indian houses, which Meta has acknowledged and aims to address through ongoing updates. The launch coincides with Google releasing its Gemini app in India, which, unlike Meta’s chatbot, supports multiple local languages, potentially giving Google a competitive advantage in the linguistically diverse Indian market.

Why does it matter?

In summary, Meta’s rollout of its English-only AI chatbot in India is a strategic effort to leverage its extensive user base by offering robust functionalities similar to established competitors. While it faces initial limitations and biases, Meta is actively working on improvements. The concurrent release of Google’s Gemini app sets up a competitive landscape, underscoring the dynamic and evolving nature of AI services in India.

Central banks urged to embrace AI

The Bank for International Settlements (BIS) has advised central banks to harness the benefits of AI while cautioning against its use in replacing human decision-makers. In its first comprehensive report on AI, the BIS highlighted the technology’s potential to enhance real-time data monitoring and improve inflation predictions – capabilities that have become critical following the unforeseen inflation surges during the COVID-19 pandemic and the Ukraine crisis. While AI models could mitigate future risks, their unproven and sometimes inaccurate nature makes them unsuitable as autonomous rate setters, emphasised Cecilia Skingsley of the BIS. Human accountability remains crucial for decisions on borrowing costs, she noted.

The BIS, often termed the central bank for central banks, is already engaged in eight AI-focused projects to explore the technology’s potential. Hyun Song Shin, the BIS’s head of research, stressed that AI should not be seen as a ‘magical’ solution but acknowledged its value in detecting financial system vulnerabilities. However, he also warned of the risks associated with AI, such as new cyber threats and the possibility of exacerbating financial crises if mismanaged.

The widespread adoption of AI could significantly impact labour markets, productivity, and economic growth, with firms potentially adjusting prices more swiftly in response to economic changes, thereby influencing inflation. The BIS has called for the creation of a collaborative community of central banks to share experiences, best practices, and data to navigate the complexities and opportunities presented by AI. That collaboration aims to ensure AI’s integration into financial systems is both effective and secure, promoting resilient and responsive economic governance.

In conclusion, the BIS’s advisory underscores the importance of balancing AI’s promising capabilities with the necessity for human intervention in central banking operations. By fostering an environment for shared knowledge and collaboration among central banks, the BIS seeks to maximise AI benefits while mitigating inherent risks, thereby supporting more robust economic management in the face of technological advancements.