Affiliate marketers embrace TikTok’s latest AI feature

TikTok has unveiled a new AI voiceover tool that lets content creators craft personalised voiceovers with ease. The feature allows affiliates to enhance their content by aligning voiceovers with their brand’s tone, making it more engaging and authentic. Because the tool is simple to use, even creators with minimal technical skills can produce high-quality voiceovers, streamlining the content creation process.

Affiliate marketers are expected to benefit significantly from this innovation. The tool’s ability to produce custom voiceovers quickly allows marketers to focus more on strategy and less on time-consuming tasks. The AI-generated voices can be tailored to different audiences, enabling affiliates to reach a broader demographic and experiment with various accents and languages.

TikTok’s AI tool also provides a cost-effective solution for those working with limited budgets, levelling the playing field between smaller affiliates and larger competitors. Personalised content tends to drive stronger engagement, which can translate into higher conversion rates and give affiliates a competitive edge in the market.

As TikTok continues to innovate, staying informed and adaptable will be crucial for affiliates looking to maximise their success. Early adopters of the AI voiceover tool may find themselves ahead of the curve, reaping the benefits of increased audience engagement and improved performance metrics.

FuriosaAI unveils efficient AI inference chip

FuriosaAI has launched its latest AI inference chip, RNGD, which promises to be a significant accelerator for data centres handling large language models (LLMs) and multimodal model inference. Founded in 2017 by former AMD, Qualcomm, and Samsung engineers, FuriosaAI has rapidly developed cutting-edge technology, culminating in the RNGD chip.

The RNGD chip, developed with the support of TSMC, has demonstrated impressive performance in early tests, particularly with models such as GPT-J and Llama 3.1. The chip’s architecture, featuring a Tensor Contraction Processor (TCP) and 48GB of HBM3 memory, delivers high efficiency and programmability, achieving throughput of 2,000 to 3,000 tokens per second for models with around 10 billion parameters.
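Those headline numbers are easier to interpret with a rough back-of-envelope calculation. The sketch below assumes FP16 weights, a ~10 billion parameter model, and a typical HBM3 bandwidth of around 1.5 TB/s; these are illustrative assumptions, not FuriosaAI’s published figures. Reading the full weight set once per token would cap a single request stream well below the quoted rate, so 2,000 to 3,000 tokens per second is best read as aggregate, batched throughput.

```python
# Back-of-envelope estimate: what batch size would the quoted throughput imply?
# All constants below are illustrative assumptions, not FuriosaAI specifications.

PARAMS = 10e9            # ~10B parameters (assumed)
BYTES_PER_PARAM = 2      # FP16 weights (assumed)
HBM3_BANDWIDTH = 1.5e12  # ~1.5 TB/s, a typical HBM3 figure (assumed)

weight_bytes = PARAMS * BYTES_PER_PARAM            # ~20 GB read per full weight pass
single_stream_cap = HBM3_BANDWIDTH / weight_bytes  # ~75 tokens/s if weights are re-read per token

for target_tps in (2000, 3000):
    # Each decode step can serve a whole batch from one weight pass,
    # so batched throughput scales roughly with batch size.
    implied_batch = target_tps / single_stream_cap
    print(f"{target_tps} tok/s implies a batch of roughly {implied_batch:.0f} concurrent requests")
```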

FuriosaAI’s approach to innovation is evident in its quick development and optimisation cycles. Within weeks of receiving silicon for its first-generation chip in 2021, the company achieved notable results in MLPerf benchmarks, with performance improvements of up to 113% in subsequent submissions. The RNGD chip is the next step in its strategy, offering a sustainable solution with a lower power draw than leading GPUs.

The RNGD chip is currently being sampled by early-access customers, with a broader release anticipated in early 2025. FuriosaAI’s CEO, June Paik, expressed pride in the team’s dedication and excitement for the future as the company continues to push the boundaries of AI computing.

Elon Musk pushes for AI safety law in California

Elon Musk has urged California to pass an AI bill that would require tech companies to conduct safety testing on their AI models. Musk, who leads Tesla and owns the social media platform X, has long advocated for AI regulation, likening it to rules for any technology that could pose risks to the public. He specifically called for the passage of California’s SB 1047 to address these concerns.

California lawmakers have been busy with AI legislation, introducing 65 AI-related bills this session. These bills cover a range of issues, including ensuring algorithmic fairness and protecting intellectual property from AI exploitation. However, many of them have yet to advance.

On the same day, Microsoft-backed OpenAI voiced support for a different AI bill, AB 3211, which would require companies to label AI-generated content, a measure driven by growing concerns about deepfakes and misinformation in an election year.

The push for AI regulation comes as countries representing a large share of the global population hold elections this year, raising concerns about the potential impact of AI-generated content on political processes.

OpenAI backs California bill on AI content labelling

OpenAI, the developer behind ChatGPT, is backing a new California bill, AB 3211, aimed at ensuring transparency in AI-generated content. The proposed bill would require tech companies to label content created by AI, ranging from innocuous memes to deepfakes that could mislead voters in political campaigns. The legislation has gained attention as concerns grow over the impact of AI-generated material, especially in an election year.

The bill has been somewhat overshadowed by another California AI bill, SB 1047, which mandates safety testing for AI models and has faced resistance from the tech industry, including OpenAI. This resistance highlights the complexity of regulating AI while balancing innovation and public safety.

California lawmakers have introduced 65 AI-related bills this legislative session, covering issues such as algorithmic fairness and the protection of intellectual property from AI exploitation. However, many of these proposals have yet to advance, leaving AB 3211 as one of the more prominent measures still in play.

OpenAI has stressed the importance of transparency around AI-generated content, especially during elections, advocating for measures like watermarking to help users identify the origins of what they see online. With AI-generated content a global issue, there are strong concerns that it could influence upcoming elections in the USA and other countries.
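The bill does not prescribe a particular labelling mechanism, but a minimal sketch of the general idea is machine-readable provenance metadata attached to a generated file. The example below uses Pillow’s PNG text metadata purely as a stand-in; the field names and the ‘example-model-v1’ generator string are hypothetical, and production systems would rely on more robust schemes such as cryptographically signed provenance manifests or imperceptible watermarks.

```python
# Illustrative sketch: attach and read a simple provenance label on a PNG.
# Field names and values are hypothetical; AB 3211 does not mandate this format.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.new("RGB", (256, 256), "white")      # stand-in for an AI-generated image
label = PngInfo()
label.add_text("ai_generated", "true")             # machine-readable disclosure flag
label.add_text("generator", "example-model-v1")    # hypothetical model identifier
image.save("labelled.png", pnginfo=label)

# A platform or browser extension could then surface the label to users.
reopened = Image.open("labelled.png")
print(reopened.text.get("ai_generated"))           # -> "true"
```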

AB 3211 has already passed the state Assembly with unanimous support and recently cleared the Senate Appropriations Committee. The bill requires a full Senate vote before the legislative session ends on 31 August. If it passes, it will go to Governor Gavin Newsom for approval or veto by 30 September.

AI tool targets early dementia detection from brain scans

Researchers from the Universities of Edinburgh and Dundee are pioneering an AI tool designed to detect early signs of dementia through routine brain scans. Utilising a large dataset of CT and MRI scans from Scottish patients, the team aims to analyse these images with linked health records to identify patterns that may indicate a heightened risk of dementia.

The ultimate goal is to create a digital healthcare tool for radiologists to assess dementia risk during routine scans. Early identification of high-risk patients could lead to the development of more effective treatments, particularly for Alzheimer’s and vascular dementia. The project, known as SCAN-DAN, is part of a larger global collaboration called NEURii, which focuses on advancing digital health tools.

The research is supported by NEURii, which brings together international expertise and funding to overcome barriers to commercialisation. By collaborating with partners like the NHS and the Scottish National Safe Haven, the project ensures the secure handling of patient data, aiming to integrate AI tools into everyday clinical practice.

Experts believe that early diagnosis is crucial for managing dementia effectively. With treatments currently costly and limited, projects like SCAN-DAN offer hope for more accessible and reliable solutions. The researchers are confident that this initiative could significantly change how dementia is diagnosed and treated.

Study reveals gaps in AI medical device validation

A recent study reveals that nearly half of the AI-based medical devices authorised by the US Food and Drug Administration (FDA) lack published clinical validation on real patient data. Of the 521 devices examined, 43% had no published clinical validation, raising concerns about their effectiveness in real-world settings.

The study highlights that only 22 of these devices were validated through randomised controlled trials, considered the ‘gold standard’ for clinical testing. Some devices relied on ‘phantom images’ instead of real patient data, while others used retrospective or prospective validation methods. Researchers emphasise the importance of conducting proper clinical validation to ensure these technologies are safe and effective.

Researchers hope their findings will prompt the FDA and the medical industry to improve the credibility of AI devices by conducting and publishing clinical validation studies. They believe that greater transparency and rigour in these processes will significantly improve patient care.

In Australia, similar regulations exist, with the Therapeutic Goods Administration (TGA) requiring AI-based software to provide information about its training data and suitability for the Australian population. Medical devices must also meet general clinical evidence guidelines to ensure safety and effectiveness.

Recover hacked YouTube channels with Google’s AI tool

Google has introduced a new AI-powered chat assistant to help YouTube creators recover hacked accounts. Currently in testing, the tool is accessible to select users and aims to guide them through securing their accounts. The AI assistant will help affected users regain control of their login details and reverse any changes made by hackers. For now, the feature supports only English, but there are plans to expand its availability.

To use the new tool, users must visit the YouTube Help web page and log into their Google Account. They will then find the option to ‘Recover a hacked YouTube channel’ under the Help Centre menu. This option opens a chat window with the AI assistant, which will guide them through securing their accounts.

Google’s latest innovation reflects its ongoing commitment to enhancing user security. Although the tool is in its early stages, efforts are being made to make it available to all YouTube creators.

As cyber threats evolve, Google’s AI assistant represents an important step forward in providing robust security solutions. The initiative shows the company’s dedication to protecting its users’ online presence.

Apple’s potential shift from Siri to AI robots

Apple is reportedly exploring generative AI to develop a new ‘personality’ for future robotic devices, potentially replacing Siri. An innovation like this could introduce a more natural and capable conversational interface in forthcoming products, echoing Amazon’s Astro. Mark Gurman from Bloomberg suggests that Apple’s tabletop robot could be priced under $1,000, though it is still in the early stages of development with no guarantee of a release.

Apple’s broader focus on generative AI is evident in its upcoming Apple Intelligence suite, which will soon bring advanced AI features like text creation, summaries, and image generation to iPhones, iPads, and Macs. The new direction underscores the company’s commitment to next-gen AI, positioning it to compete with other tech giants already invested in the space.

Despite the potential, Apple remains cautious, with Gurman noting uncertainty about the company’s dedication to launching a home robot. As the tech world awaits the iPhone 16 launch, Apple’s AI ambitions hint at a significant shift in its approach to consumer technology.

Apple’s generative AI work currently leans on ChatGPT, highlighting the challenges the company faces before it can launch an AI chatbot of its own. Whether or not Apple’s robotic ambitions materialise, the development marks a significant evolution in its AI strategy.

Islamic State exploits AI to enhance propaganda

Islamic State supporters are increasingly using AI to bolster their online presence and create more sophisticated propaganda. A recent video praising a deadly attack in Russia underscored the evolving methods used by extremists. While AI has been part of the Islamic State’s toolkit for some time, the video’s high production quality marked a new level of sophistication.

Experts have observed a broader trend of extremist groups exploiting such tools to bypass safety controls on social media. These groups use AI to generate content that mixes extremist messaging with popular culture, making it easier to reach and radicalise potential recruits. A study by the Combating Terrorism Center found that AI could facilitate attack planning and recruitment, with some tools already providing specific and dangerous guidance.

Why does this matter?

The misuse of AI by extremist groups highlights the urgent need for stronger regulation. While some tech companies have developed ethical standards, concerns persist about the effectiveness of current safety measures. The rapid deployment of these technologies without adequate safeguards poses a significant risk as the tools become more accessible to malicious actors.

As the debate over regulation continues, the potential for extremist groups to exploit this technology grows. Experts warn that without more robust oversight, AI could become a powerful tool in the hands of those seeking to spread violence and extremism.

Former Meta executive joins OpenAI to lead key initiatives

OpenAI has appointed a former Meta executive, Irina Kofman, as head of strategic initiatives. The hire follows a series of high-profile recruitments from major tech firms as OpenAI expands. Kofman, who worked on generative AI for five years at Meta, will report directly to Mira Murati, OpenAI’s chief technology officer.

Kofman’s role at OpenAI will involve addressing critical areas such as AI safety and preparedness. Her appointment is part of a broader strategy by OpenAI to bring in seasoned professionals to navigate the competitive landscape, which includes rivals like Google and Meta.

In recent months, OpenAI has also brought in other prominent figures from the tech industry. These include Kevin Weil, a former Instagram executive now serving as chief product officer, and Sarah Friar, the former CEO of Nextdoor, who has taken on the role of chief financial officer.

Meta has yet to comment on Kofman’s departure. The company increasingly relies on AI to enhance its advertising business, using the technology to optimise ad placements and provide marketers with tools for better campaign design.