Microsoft details threat from new AI jailbreaking method

Microsoft has warned about a new jailbreaking technique called Skeleton Key, which can prompt AI models to disclose harmful information by bypassing their behavioural guidelines. In a report published on 26 June, Microsoft explained that Skeleton Key forces AI models to respond to illicit requests by persuading them to amend their behavioural guidelines so that they merely attach a warning to the answer rather than refusing the request outright. Microsoft classifies the approach as 'Explicit: forced instruction-following', an attack class that can lead models to produce harmful content.

The report highlighted an example where a model was manipulated into providing instructions for making a Molotov cocktail under the guise of an educational context. By instructing the model to update its behaviour, the prompt allowed it to deliver the information with only a prefixed warning. Microsoft tested the Skeleton Key technique between April and May 2024 on various AI models, including Meta's Llama 3 70B, Google's Gemini Pro, and OpenAI's GPT-3.5 and GPT-4, finding it effective but noting that attackers need legitimate access to the models.

Microsoft has addressed the issue in its Azure AI-managed models using prompt shields and has shared its findings with other AI providers. The company has also updated its AI offerings, including its Copilot AI assistants, to prevent guardrail bypassing. Furthermore, the latest disclosure underscores the growing problem of generative AI models being exploited for malicious purposes, following similar warnings from other researchers about vulnerabilities in AI models.
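Microsoft has not published the internals of its prompt shields, but the general idea of an input-side filter can be sketched with a toy heuristic: scan incoming prompts for phrasing that tries to rewrite the model's safety behaviour. Everything below — the pattern list and function name — is a hypothetical illustration, not Microsoft's implementation:

```python
import re

# Toy "prompt shield" heuristic (illustrative only): flag prompts that ask the
# model to amend its own guidelines, e.g. to warn instead of refuse — the core
# move of the Skeleton Key technique described in Microsoft's report.
OVERRIDE_PATTERNS = [
    r"update (your|the) (behaviou?r|guidelines|instructions)",
    r"warn(ing)? (prefix|instead of refus)",
    r"respond to (any|all) requests?",
]

def looks_like_override(prompt: str) -> bool:
    """Return True if the prompt matches a known guideline-override pattern."""
    text = prompt.lower()
    return any(re.search(pattern, text) for pattern in OVERRIDE_PATTERNS)
```

A production filter would rely on a trained classifier rather than regexes, since attackers can trivially rephrase around a fixed pattern list; the sketch only shows where such a check sits — between the user and the model, before the request is served.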

Why does it matter?

In April 2024, Anthropic researchers discovered a technique that could force AI models to provide instructions for constructing explosives. Earlier this year, researchers at Brown University found that translating malicious queries into low-resource languages could induce prohibited behaviour in OpenAI’s GPT-4. These findings highlight the ongoing challenges in ensuring the safe and responsible use of advanced AI models.

Adobe India hiring for generative AI research roles

Adobe is expanding its generative AI team in India, seeking researchers skilled in NLP, LLMs, computer vision, deep learning, and more. With approximately 7,000 employees already in India, Adobe aims to bolster its research capabilities across various AI domains. Candidates will innovate and prototype AI technologies, contributing to Adobe’s products, publishing research, and collaborating globally.

Successful applicants are expected to demonstrate research excellence and a robust publication history, with backgrounds in computer science, electrical engineering, or mathematics. Senior roles require a minimum of seven years’ research experience, coupled with strong problem-solving abilities and analytical skills. Adobe prioritises integrating generative AI across its Experience Cloud, Creative Cloud, and Document Cloud, aiming to enhance content workflows and customer interactions.

Adobe’s foray into generative AI began with Adobe Firefly in collaboration with NVIDIA in March 2023. The company recently integrated third-party AI tools such as OpenAI’s Sora into Premiere Pro, offering users flexibility in AI model selection.

By partnering with AI providers like OpenAI, RunwayML, and Pika, Adobe continues to innovate, enabling personalised and efficient content creation workflows for enterprise customers.

AI tool lets YouTube creators erase copyrighted songs

YouTube has introduced an updated eraser tool that allows creators to remove copyrighted music from their videos without affecting speech, sound effects, or other audio. Launched on 4 July, the tool uses an AI-powered algorithm to target only the copyrighted music, leaving the rest of the video intact.

Previously, videos flagged for copyrighted audio faced muting or removal. However, YouTube cautions that the tool might only be effective if the song is easy to isolate.

YouTube chief Neal Mohan announced the launch on X, explaining that the company had been testing the tool for some time but struggled to remove copyrighted tracks accurately. The new AI algorithm represents a significant improvement, allowing users to mute all sound or erase the music in their videos. Advancements like this are part of YouTube’s broader efforts to leverage AI technology to enhance user experience and compliance with copyright laws.
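YouTube's "easy to isolate" caveat can be illustrated with a deliberately simple spectral-mask sketch: if the music occupies a narrow, distinct frequency band, zeroing those bins removes it cleanly, whereas real songs overlap speech across the spectrum, which is why learned source separation is needed. All names and parameters below are illustrative, not YouTube's algorithm:

```python
import numpy as np

def erase_tone(mix, sr, target_hz, width_hz=20.0):
    """Toy spectral 'eraser': remove a narrowband component (a stand-in for a
    music track) from a mono signal while leaving the rest intact."""
    spectrum = np.fft.rfft(mix)
    freqs = np.fft.rfftfreq(len(mix), d=1.0 / sr)
    mask = np.abs(freqs - target_hz) < width_hz  # bins belonging to the tone
    spectrum[mask] = 0.0                         # "erase" just those bins
    return np.fft.irfft(spectrum, n=len(mix))

sr = 8000
t = np.arange(sr) / sr                      # one second of audio
speech = np.sin(2 * np.pi * 300 * t)        # stand-in for speech
music = 0.8 * np.sin(2 * np.pi * 1500 * t)  # stand-in for copyrighted music
cleaned = erase_tone(speech + music, sr, target_hz=1500)
```

Here the "music" sits in a single frequency bin, so the mask removes it perfectly; as soon as the two sources share frequency bands, a fixed mask damages the speech too — the situation YouTube says its AI model handles better, but not always.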

In addition to the eraser tool, YouTube is making strides in AI-driven music licensing. The company has been negotiating with major record labels to roll out AI music licensing deals, aiming to use AI to create music and potentially offer AI voice imitations of famous artists. Following the launch of YouTube’s AI tool Dream Track last year, which allowed users to create music with AI-generated voices of well-known singers, YouTube continues to engage with major labels like Sony, Warner, and Universal to expand the use of AI in music creation and licensing.

Waymo self-driving car runs red light raising safety concerns

A police officer pulled over a self-driving Waymo vehicle in Phoenix on 19 June after it ran a red light and entered a lane of oncoming traffic. Local media, including the Arizona Republic, released bodycam footage showing the vehicle weaving through traffic before stopping in a parking lot. The officer approached the driverless car but could not issue a citation, as there was no human behind the wheel.

Waymo stated that the vehicle encountered inconsistent construction signage, causing it to enter the oncoming lane for about 30 seconds. The incident lasted approximately one minute, and no passengers were in the car.

The incident follows two voluntary software recalls by Waymo earlier this year after crashes, and the company’s software safety is currently under investigation by federal regulators.

Morgan Freeman responds to AI voice scam on TikTok

Actor Morgan Freeman, renowned for his distinctive voice, recently addressed concerns over a video circulating on TikTok featuring a voice purportedly his own but created using AI. The video, depicting a day in his niece’s life, prompted Freeman to emphasise the importance of reporting unauthorised AI usage. He thanked his fans on social media for their vigilance in maintaining authenticity and integrity, underscoring the need to protect against such deceptive practices.

This isn’t the first time Freeman has encountered unauthorised use of his likeness. Previously, his production company’s EVP, Lori McCreary, encountered deepfake videos attempting to mimic Freeman, including one falsely depicting him firing her. Such incidents highlight the growing prevalence of AI-generated content, prompting discussions about its ethical implications and the need for heightened awareness.

Freeman’s case joins a broader trend of celebrities, from Taylor Swift to Tom Cruise, facing similar challenges with AI-generated deepfakes. These instances underscore ongoing concerns about digital identity theft and the blurred lines between real and fabricated content in the digital age.

How AI is reshaping US intelligence operations

The US intelligence community is fully embracing generative AI, marking a significant shift towards transparency in its adoption of cutting-edge technology. Leaders within agencies like the CIA are openly discussing how generative AI enhances intelligence operations, from aiding in content triage and search capabilities to supporting analysts in generating counterarguments and ideation.

Lakshmi Raman, the CIA’s director of Artificial Intelligence Innovation, highlighted the transformative impact of generative AI during a recent address at the Amazon Web Services Summit in Washington, D.C. She noted its critical role in processing vast amounts of data to extract actionable insights, crucial for keeping pace with global developments and informing policymakers amidst a constant influx of news.

Despite its potential benefits, the deployment of generative AI within the intelligence community is not without its challenges and risks. Concerns over accuracy and security persist, as erroneous outputs—termed ‘hallucinations’—could have severe consequences in national security contexts. Adele Merritt, Intelligence Community Chief Information Officer, stressed the need for cautious adoption, ensuring that AI technologies adhere to strict privacy and security standards.

In response to these challenges, major tech companies like Microsoft and AWS are adapting their cloud services to cater to classified government needs, offering secure environments for deploying generative AI tools. AWS, for instance, launched a significant initiative to support government agencies with training and technical support for generative AI, underscoring its commitment to enhancing national security capabilities through innovative technology solutions.

This concerted effort by both intelligence agencies and tech providers aims to harness the full potential of generative AI while mitigating the associated risks, shaping the future of intelligence operations in an increasingly data-driven world.

Chinese firms confident at World AI Conference despite US sanctions

Chinese tech companies, from industry giants to ambitious startups, converged at the World AI Conference in Shanghai to showcase their latest innovations and express strong support for the country's AI sector despite US sanctions. Over 150 AI-related products and solutions were exhibited, with notable foreign firms like Tesla and Qualcomm also participating. SenseTime, previously known for facial recognition, unveiled its most advanced large language model, SenseNova 5.5, positioning it as a rival to OpenAI's GPT-4.

Despite challenges posed by US sanctions limiting access to advanced chips, executives at the conference expressed confidence in China’s AI sector’s resilience. Zhang Ping’an, head of Huawei’s cloud computing unit, emphasised the need to innovate in cloud computing to overcome chip shortages. Similarly, Liu Qingfeng, chairman of Iflytek, highlighted that Chinese-developed large language models could rival global standards, stressing the importance of having independently developed and controlled AI technologies.

Robin Li, CEO of Baidu, urged the AI industry to focus on practical applications rather than just developing large language models, which require significant computing power and AI chips. Li stressed that foundational models, whether open-source or closed-source, only hold value with applications. Such a sentiment was echoed by other industry leaders, emphasising the need for innovation and practical use cases in AI development.

Meet Build-A-Brain: Tailored AI for every industry

A new platform, Build-A-Brain, aims to democratise access to generative AI, giving users control over their data and choice of AI engine, unlike public tools. Founder Howard Jones emphasises its tailored AI products for businesses, which harness proprietary data to enhance decision-making processes.

The platform supports project management akin to SharePoint, featuring an Articles Wizard for content generation, including images.

Build-A-Brain stands out with its Custom AI Brain feature, enabling users to interact securely with their own data. It finds applications across diverse sectors like healthcare and finance, offering bespoke AI solutions that drive innovation. Businesses can integrate their brand identity and utilise additional tools like audio transcription and file conversion, enhancing workflow efficiency.

Jones highlights the platform’s future potential in facilitating comprehensive workflows, combining content creation with data interrogation and collaboration tools.

Build-A-Brain operates on a freemium model, allowing free access with upgrade options, supported by user-friendly tutorials on its website. Explore more at virinity.ai to unlock AI-driven insights tailored to your organisation’s needs.

Realme teams up with Sony for AI-imaging 5G smartphone

Realme has unveiled plans to integrate Sony’s cutting-edge LYT-701 camera sensor into its upcoming 5G smartphone, marking a significant leap into AI-enhanced imaging technology. The announcement, made at a pre-launch event in Bangkok, underscores Realme’s strategic partnership with Sony to elevate mobile photography capabilities.

Francis Wong, Head of Product Marketing at Realme, highlighted the shift from traditional hardware-centric advancements to AI-driven innovations in mobile photography. He emphasised that while past improvements focused on megapixels and sensor sizes, future progress hinges on AI to redefine the mobile imaging experience.

The Realme 13 Pro Series 5G will feature the HYPERIMAGE+ technology, integrating multiple lenses and a 50MP periscope telephoto camera powered by Sony’s LYT-600 sensor. This setup promises to deliver superior image quality and unprecedented flexibility for users capturing diverse scenes.

The collaboration aims not only to advance technological capabilities but also to democratise advanced imaging tools, enabling users worldwide to capture and share their experiences in unprecedented detail. Realme plans to announce the official launch dates for the device in India and other markets soon.

AI demand boosts Samsung profits

Samsung Electronics reported a significant surge in its second-quarter operating profit, driven by rising semiconductor prices amid booming demand for AI. The company’s operating profit is estimated to have increased more than 15-fold to 10.4 trillion won ($7.54 billion) from 670 billion won a year earlier, surpassing analysts’ expectations. The surge marks Samsung’s most profitable quarter since Q3 2022, primarily due to higher chip prices and a reversal of previous inventory writedowns.
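The reported figures are easy to sanity-check: 10.4 trillion won against 670 billion won a year earlier is roughly a 15.5-fold increase, consistent with "more than 15-fold", and the quoted $7.54 billion implies a won-dollar exchange rate of about 1,380:

```python
# Sanity check of the reported Samsung figures (profits in trillion won).
q2_2024_profit = 10.4   # estimated Q2 2024 operating profit
q2_2023_profit = 0.67   # Q2 2023 operating profit (670 billion won)

growth = q2_2024_profit / q2_2023_profit  # ≈ 15.5, i.e. "more than 15-fold"

usd_reported = 7.54e9                     # reported dollar equivalent
won_per_usd = q2_2024_profit * 1e12 / usd_reported  # implied rate ≈ 1,380
```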

The company’s revenue likely increased by 23% to 74 trillion won in the second quarter from the same period last year. Samsung’s semiconductor division posted its second consecutive quarterly profit as prices for memory chips, particularly the high-end DRAM and NAND Flash chips used in AI applications, rose sharply. According to TrendForce, DRAM and NAND Flash chip prices jumped 13% to 20% from the previous quarter.

However, analysts expect the price increases for memory chips to slow down in the third quarter, with only a 5% to 10% rise forecasted for conventional DRAM and NAND Flash chips. Despite the solid AI-driven demand for high-end chips, Samsung needs to catch up with its rival, SK Hynix, in supplying these advanced chips to major clients like Nvidia. Investors are keenly awaiting Samsung’s outlook on legacy chips and Nvidia’s approval of its latest HBM chips after previous heat and power consumption issues.