OnlyFans, a platform known for offering subscribers ‘authentic relationships’ with content creators, faces scrutiny over the use of AI chatbots impersonating performers. Some management agencies employ AI software to sext with subscribers, bypassing the need for human interaction. NEO Agency, for example, uses a chatbot called FlirtFlow to create what it claims are ‘genuine and meaningful’ connections, although OnlyFans’ terms of service prohibit such use of AI.
Despite these rules, chatbots are prevalent. NEO Agency manages about 70 creators, with half using FlirtFlow. The AI engages subscribers in small talk to gather personal information, aiming to extract more money. While effective for high-traffic accounts, human chatters are still preferred for more personalised interactions, especially in niche erotic categories.
Similarly, Australian company Botly offers software that generates responses for OnlyFans messages, which a human can then send. Botly claims its technology is used in over 100,000 chats per month. Such practices raise concerns about transparency and authenticity on platforms that promise direct interactions with creators.
The issue coincides with broader discussions on online safety. The UK recently amended its Online Safety Bill to combat deepfakes and revenge porn, highlighting the rising threat of deceptive digital practices. Meanwhile, other platforms like X (formerly Twitter) have officially allowed adult content, increasing the complexity of managing online safety and authenticity.
OpenAI has assured US lawmakers it is committed to safely deploying its AI tools. ChatGPT’s owner decided to address US officials after concerns were raised by five senators, including Senator Brian Schatz of Hawaii, regarding the company’s safety practices. In response, OpenAI’s Chief Strategy Officer, Jason Kwon, emphasised the company’s mission to ensure AI benefits all of humanity and highlighted the rigorous safety protocols it implements at every stage of the process.
a few quick updates about safety at openai:
as we said last july, we’re committed to allocating at least 20% of the computing resources to safety efforts across the entire company.
our team has been working with the US AI Safety Institute on an agreement where we would provide…
OpenAI pledged to allocate 20% of its computing resources to safety-related research over multiple years. The company also stated that it would no longer enforce non-disparagement agreements for current and former employees, addressing concerns about previously restrictive policies. On social media, OpenAI’s CEO, Sam Altman, shared that the company is collaborating with the US AI Safety Institute to provide early access to its next foundation model to advance AI evaluation science.
Kwon mentioned the recent establishment of a safety and security committee, which is currently reviewing OpenAI’s processes and policies. The review is part of a broader effort to address the controversies OpenAI has faced regarding its commitment to safety and the ability of employees to voice their concerns.
Recent resignations from key members of OpenAI’s safety teams, including co-founders Ilya Sutskever and Jan Leike, have highlighted internal concerns. Leike, in particular, has publicly criticised the company for prioritising product development over safety, underscoring the ongoing debate within the organisation about its approach to balancing innovation with security.
Intel will be laying off thousands of workers in an effort to finance its recovery amid plummeting revenues and market share. While the US chipmaker remains one of the dominant players in the personal computer market, it has not kept pace with the growing demand for AI chips.
Intel’s CEO, Pat Gelsinger, has initiated major investments in expanding manufacturing capacity and upgrading the company’s technology. Traditionally focused on designing and producing its own chips, Intel will now strive to enter the foundry business and manufacture chips for other companies as well.
Why does this matter?
Intel’s push for innovation is vital at this juncture: although the AI revolution has made semiconductors more important than ever, Intel’s dominance in the industry has waned. With competitors like NVIDIA, TSMC, Qualcomm, and MediaTek emerging as industry frontrunners, Intel’s cost-cutting is a bid to reclaim its market position.
A formal complaint has been filed with Argentina’s Agency for Access to Public Information (AAIP) against Meta, the parent company of Facebook, WhatsApp, and Instagram. The case comes amid growing international scrutiny of large technology companies’ data protection practices.
The filing was made by Facundo Malaureille and Daniel Monastersky, lawyers specialising in personal data protection and directors of the Diploma in Data Governance at CEMA University. The complaint targets the company’s use of personal data for AI training.
The filing consists of 22 points and requests that Meta Argentina explain its practices for collecting and using personal data for AI training. The AAIP, as the enforcement authority of Argentina’s Personal Data Protection Law (Law 25,326), will evaluate and respond to the filing.
The country’s technological and legal community is closely watching the development of this case, given that the outcome of this complaint could impact innovation in AI and the protection of personal data in Argentina in the coming years.
Microsoft plans to increase its spending on AI infrastructure this fiscal year despite slower growth in its cloud business. This announcement led to a 4% drop in its share price after an initial 7% decline. The tech giant, along with others like Google, is investing heavily in data centres to leverage the AI boom, with Microsoft’s capital spending rising 77.6% to $19 billion in its fiscal fourth quarter, primarily for cloud and AI-related expenses.
Despite these investments, investors were disappointed by the slower growth of Microsoft’s Azure cloud service. The company forecast 28% to 29% growth for Azure in the upcoming quarter, slightly below market expectations. That guidance followed a 29% increase in the previous quarter, which itself fell short of estimates, indicating a slowdown from earlier months.
CEO Satya Nadella highlighted that AI services are becoming a significant part of Azure’s revenue growth, with over 60,000 customers using Azure AI, a nearly 60% increase from the previous year. Microsoft has integrated AI across its products, including its search engine Bing and productivity tools like Word, driven by its substantial investment in OpenAI.
Microsoft’s total revenue rose 15% to $64.7 billion in the fourth quarter, exceeding analyst expectations. The company also grew in its personal computing business, benefiting from stabilising PC sales. However, revenue from its Intelligent Cloud unit, which includes Azure, missed analyst estimates, rising 19% to $28.5 billion.
Brazil has announced a 23 billion reais ($4.07 billion) investment plan for AI development. The initiative aims to foster sustainable and socially oriented technologies within the nation, enhancing its technological autonomy and competitiveness in the global AI market.
The investment plan includes immediate impact initiatives targeting key sectors such as public health, agriculture, environment, business, and education. These initiatives focus on developing AI systems to streamline customer service and operational procedures.
A significant portion of the funds, nearly 14 billion reais, will be allocated to business innovation projects over the next four years. More than 5 billion reais will be invested in AI infrastructure and development, with the remaining resources dedicated to training, public service improvements, and AI regulation support.
President Luiz Inácio Lula da Silva emphasised the importance of Brazil developing its own AI technologies rather than relying on imports. He highlighted the potential of AI to generate income and employment within the country.
OpenAI has begun rolling out an advanced voice mode to a select group of ChatGPT Plus users, according to a post on X by the Microsoft-backed AI startup. Initially slated for a late June release, the launch was delayed to July to ensure the new feature met the company’s standards. This voice mode enables users to engage in real-time conversations with ChatGPT, including the ability to interrupt the AI while it is speaking, enhancing the realism of interactions.
The new audio capabilities address a common challenge for AI assistants, making conversations more fluid and responsive. In preparation for this release, OpenAI has been refining the model’s ability to detect and reject certain types of content while also enhancing the overall user experience and ensuring its infrastructure can support the new feature at scale.
We’re starting to roll out advanced Voice Mode to a small group of ChatGPT Plus users. Advanced Voice Mode offers more natural, real-time conversations, allows you to interrupt anytime, and senses and responds to your emotions.
This development is part of OpenAI’s broader strategy of introducing innovative generative AI products as the company aims to stay ahead in the competitive AI market. Businesses are rapidly adopting AI technology, and OpenAI’s efforts to improve and expand its offerings are crucial to maintaining its leadership position in this fast-growing field.
Apple Inc has joined US President Joe Biden’s voluntary commitments to govern artificial intelligence, aimed at preventing the misuse of AI technology. The White House announced on Friday that Apple is now part of a group of 15 firms that have committed to ensuring AI’s power is not used for harmful purposes. The original commitments, introduced in July 2023, were initially signed by companies such as Google and Microsoft’s partner OpenAI.
In September, additional firms including Adobe, IBM, and Nvidia also pledged their support. The initiative is part of a broader effort by the Biden administration to promote responsible AI innovation, which includes assembling a team of AI experts and urging tech CEOs to adopt measures that prevent AI from being used destructively.
Apple’s participation comes amid its own challenges with AI, as the company recently delayed AI features for iOS and iPadOS. This commitment underscores the importance of a unified approach among major tech companies to address the ethical and safety concerns surrounding AI.
The Biden administration is set to introduce a new rule expanding US powers to block exports of semiconductor manufacturing equipment to Chinese chipmakers. However, essential allies like Japan, the Netherlands, and South Korea will be exempt, minimising the rule’s overall impact. The additional restriction follows previous export controls aimed at hindering China’s advancements in supercomputing and AI for military purposes.
The new rule will extend the Foreign Direct Product rule, preventing several Chinese semiconductor factories from receiving equipment exports from countries such as Israel, Taiwan, Singapore, and Malaysia. The rule, which has previously targeted Huawei, allows the US to block sales of products made with American technology, even if produced abroad. The exemptions highlight a diplomatic effort to maintain international cooperation while enforcing export controls.
Additionally, the US plans to tighten regulations by reducing the threshold of US content in foreign products subject to export controls.
The rule, still in draft form, is expected to be finalised next month.
While ASML and Tokyo Electron shares surged in response to the exemptions, this development underscores the need for a balanced approach to managing export controls while maintaining solid international alliances.
Shengshu AI, a Chinese start-up, has launched its new text-to-video tool, Vidu, for global users. The tool supports both Chinese and English text prompts, making it accessible to a wider audience. Users can generate clips of four or eight seconds through the official website. This development places Shengshu among other firms offering similar services, such as Zhipu AI and Kuaishou Technology.
Vidu, first unveiled in April, can generate a four-second clip in just 30 seconds, making it one of the fastest tools available. The technology is based on Shengshu’s self-developed architecture, U-ViT, which was detailed in a research paper by a team led by Zhu Jun, the company’s chief scientist and a professor at Tsinghua University. Shengshu’s leadership team includes several Tsinghua alumni, highlighting the university’s significant role in China’s AI ambitions.
The tool also features a new character-to-video function, allowing users to animate real or fictional characters using simple text prompts. This capability lays the groundwork for potential commercial applications in the animation and content industries. Zhang Xudong, Shengshu’s product director, envisions future developments where users can animate multiple characters and scenes, integrating AI tools with traditional sectors.
Shengshu has attracted significant investment, securing tens of millions of US dollars from backers like Qiming Venture Partners, Baidu, Alibaba’s Ant Group, and the Beijing AI Industry Investment Fund. This financial support underscores the confidence in Shengshu’s potential to lead in AI video generation, positioning it as a strong competitor to OpenAI’s Sora.