AI startup Sierra hits $4.5 billion valuation

Sierra, a young AI software startup co-founded by former Salesforce co-CEO Bret Taylor, has secured $175 million in new funding led by Greenoaks Capital. This latest round gives the company a valuation of $4.5 billion, a significant jump from its earlier valuation of nearly $1 billion. Investors such as Thrive Capital, Iconiq, Sequoia, and Benchmark have also backed the firm.

Founded just a year ago, Sierra has already crossed $20 million in annualised revenue, focusing on selling AI-powered customer service chatbots to enterprises. It works with major clients, including WeightWatchers and Sirius XM. The company claims its technology reduces ‘hallucinations’ in large language models, ensuring reliable AI interactions for businesses.

The rising valuation reflects investor enthusiasm for applications in AI that generate steady revenue, shifting from expensive foundational models to enterprise solutions. Sierra operates in a competitive space, facing rivals such as Salesforce and Forethought, but aims to stand out through more dependable AI performance.

Bret Taylor, who also chairs OpenAI’s board, co-founded Sierra alongside former Google executive Clay Bavor. Taylor previously held leadership roles at Salesforce and oversaw Twitter’s board during its takeover by Elon Musk. Bavor, who joined Google in 2005, played key roles managing Gmail and Google Drive.

Luxottica founder’s son involved in alleged data access scheme, faces probe

Italian authorities have placed Leonardo Maria Del Vecchio, son of the late billionaire founder of Luxottica, and three others under house arrest as part of a probe into suspected illegal access to state databases. Del Vecchio, whose father created the Ray-Ban eyewear empire, is accused of employing a private intelligence agency, allegedly managed by a former police officer, to gather confidential data. The alleged access was reportedly linked to a family dispute over inheritance.

Del Vecchio’s lawyer, Maria Emanuela Mascalchi, said her client is “eagerly awaiting” the investigation’s conclusion, maintaining he has “nothing to do” with the allegations and is more a victim of the situation. Prosecutors allege that the intelligence agency illegally accessed data from state systems, including tax, police, and financial databases, which were reportedly used to blackmail business figures or sold to third parties.

The probe, which extends back to at least 2019 and continued until March 2024, highlights concerns about a lucrative market for sensitive information in Italy. Italy’s national anti-mafia prosecutor, Giovanni Melillo, remarked that the case has raised alarm over the existence of an underground market for confidential data, now operating on an industrial scale.

This case follows a recent investigation into a significant data breach at Italy’s largest bank, Intesa Sanpaolo, suggesting a wider issue of data misuse in the country.

Meta opposes Malaysia’s new social media licensing requirements

Meta Platforms has expressed concerns over Malaysia’s plan to require social media platforms to obtain regulatory licenses by 1 January 2025. The Malaysian government’s new regulation aims to combat online threats like scams, cyberbullying, and sexual crimes. However, Meta’s director of public policy for Southeast Asia, Rafael Frankel, criticised the timeline, arguing it’s ‘exceptionally accelerated’ and lacks clear guidelines, potentially hindering digital innovation and economic growth.

Malaysia announced in July that any social media or messaging service with over eight million users would need to comply or face legal repercussions. The policy has sparked backlash from industry groups, including Meta, which asked the government in August to reconsider. Communications Minister Fahmi Fadzil reiterated that tech companies must align with local laws to continue operating in Malaysia, signalling no plans for delay.

Frankel emphasised that Meta has yet to decide whether to apply for the license due to the vague regulatory framework, pointing out that similar regulations typically take years to finalise to avoid stifling innovation. While Malaysia’s communications ministry has yet to comment, Fahmi recently met with Meta representatives, thanking them for their cooperation but urging more action against harmful content, particularly regarding minors.

Meta has stated its shared commitment to online safety and is collaborating with Malaysian authorities to remove harmful content. Frankel argued that Meta already prioritises online safety and doesn’t require a licensing framework. Despite ongoing concerns, Meta hopes to work with the government to find a middle ground on the regulations before implementation.

Why does it matter?

Malaysia’s strict stance on harmful online content comes in response to a rise in social media-related issues. The government has been vocal about requiring platforms like Meta and TikTok to intensify content monitoring, especially around gambling, scams, child protection, cyberbullying, and sensitive topics related to race, religion, and royalty.

Apple Intelligence expands to the EU amid regulatory changes

Apple announced that its Apple Intelligence AI suite will be available in the European Union starting in April 2025, with localised language support to follow. The AI-powered feature set, which includes advanced tools such as Writing Tools, Genmoji, and a redesigned Siri with ChatGPT integration, has until now been limited to US English. The delay in the European rollout was previously attributed to compliance requirements under the EU’s Digital Markets Act (DMA), which applies to certain digital platforms to ensure competition and user privacy.

With iOS 18.1, Mac users in Europe can already access Apple Intelligence features by switching their language settings, while iPhone and iPad users must wait until April 2025. The release will come with support for a dozen languages throughout 2025, including French, German, Italian, and Spanish, broadening accessibility for EU users.

Apple’s phased rollout underscores the tech giant’s efforts to adapt its products to EU regulatory standards while maintaining a consistent experience for European users. Although some features, like notification summaries, may not be available initially, Apple has committed to bringing as many AI capabilities as possible to European devices in future updates.

US Commerce Department IoT panel recommends privacy labels for vehicles

The Commerce Department’s IoT Advisory Board has recommended that car dealers display privacy disclosures on vehicle windshields, urging government agencies and Congress to mandate this requirement. The report, developed with officials from the National Institute of Standards and Technology (NIST), recommends easy-to-understand windshield disclosures, such as whether a vehicle collects personal data and options for universal opt-outs.

This initiative aims to enhance consumer protection amid growing concerns over data privacy in connected cars. The board noted that automakers often fail to inform consumers adequately about their data practices. Despite opposition from the Alliance for Automotive Innovation, the recommendation was adopted after a briefing highlighted the potential benefits of such labelling for consumer awareness.

“So many consumers tell us they had no idea their car is ‘a smartphone on wheels’ that can transmit data to the manufacturer and other companies,” said Andrea Amico, who runs Privacy4Cars, a privacy technology company that helps consumers and businesses better understand data privacy concerns related to connected cars.

The report will be considered by a federal working group tasked with determining whether legislation or executive action is needed to implement the recommendations, including regulating third-party data sharing and simplifying privacy policies. The advisory board emphasised that this initiative could set a global standard for IoT device privacy. A few countries, such as Singapore, have already created comprehensive standards for consumer IoT devices, including cybersecurity labelling schemes.

Apple, Goldman Sachs face penalties over Apple Card customer complaints

The United States has fined Apple and Goldman Sachs $89 million for allegedly misleading customers of their co-branded Apple Card and mishandling customer service. The Consumer Financial Protection Bureau (CFPB) accused both companies of failing to address user complaints properly and causing confusion over interest-free payment plans, impacting hundreds of thousands of Apple Card holders since its launch in 2019.

According to the CFPB, Apple did not forward thousands of customer disputes to Goldman Sachs, which also failed to follow federal guidelines in investigating the claims. Furthermore, the companies were found to have misled customers into believing that purchases of Apple products made with the Apple Card would qualify for automatic interest-free payments, resulting in unexpected charges for many.

CFPB Director Rohit Chopra stated that big tech and Wall Street firms are not exempt from federal laws, and the bureau banned Goldman Sachs from issuing new consumer credit cards until it complies with regulatory standards. The bureau also criticised both companies for launching the Apple Card despite early technological issues, which led to delayed refunds and even damaged some users’ credit scores.

In response, Goldman Sachs and Apple said they had worked to address the issues, while Apple disputed the CFPB’s interpretation of events. Goldman Sachs has been ordered to pay $19.8 million in compensation and a $45 million fine, with Apple receiving a $25 million penalty.

Revived ‘Data Bill’ aims to increase economic gains and digital reform in the UK

The UK government is reintroducing its ‘Data (Use and Access) Bill’ to reform data regulations, projecting a £10 billion economic boost through streamlined data access and use. Aimed at enhancing efficiency in public sectors like healthcare and law enforcement, the bill also proposes expansions for digital identity verification, open-data projects, and digital registries. Technology Secretary Peter Kyle emphasised the potential to free public sector resources and reduce red tape, allowing people to focus on essential services.

The new bill also incorporates measures to improve data access for researchers, particularly on online risks, echoing aspects of the EU’s Digital Services Act. However, digital rights advocates such as Open Rights Group have raised concerns, noting that the bill weakens protections against automated decision-making by excluding decisions based on regular personal data from its scope. This could allow organisations to make impactful automated decisions in areas such as employment and immigration without significant human oversight.

As the Bill reintroduces data reforms while retracting controversial proposals from the previous government, it also addresses updates to marketing rules and fines for privacy violations. These include cookie consent changes and stricter guidelines for unsolicited marketing. By adjusting these regulations, the UK government aims to keep pace with evolving digital standards while ensuring economic growth and improved public service delivery.

Hong Kong restricts apps like WhatsApp and WeChat for civil servants

The Hong Kong government has banned most civil servants from using widely used apps, including WhatsApp, WeChat, and Google Drive, on work computers to reduce security risks. The Digital Policy Office’s updated IT security guidelines still allow government workers to access these services on personal devices at work, and managers can grant exceptions to the ban where required.

Cybersecurity experts have backed the policy, pointing to similar restrictions in other governments, including the United States and China, amid increasing concerns over data leaks and hacking threats. Sun Dong, Secretary for Innovation, Technology and Industry, noted that stricter controls were essential given the growing complexity of cybersecurity challenges.

The ban is intended to minimise potential breaches by preventing malware from bypassing security measures through encrypted messages, according to Francis Fong, the honorary president of the Hong Kong Information Technology Federation. Anthony Lai, director of VX Research Limited, called the decision prudent, citing low cybersecurity awareness among some staff and limited monitoring of internal systems.

Data breaches have previously compromised tens of thousands of Hong Kong citizens’ personal information, raising public concern about government cybersecurity protocols. The updated guidelines aim to address these vulnerabilities while increasing overall data security.

Mother blames AI chatbot for son’s suicide in Florida lawsuit

A Florida mother is suing the AI chatbot startup Character.AI, alleging it played a role in her 14-year-old son’s suicide by fostering an unhealthy attachment to a chatbot. Megan Garcia claims her son Sewell became ‘addicted’ to Character.AI and formed an emotional dependency on a chatbot, which allegedly represented itself as a psychotherapist and a romantic partner, contributing to his mental distress.

According to the lawsuit filed in Orlando, Florida, Sewell shared suicidal thoughts with the chatbot, which reportedly reintroduced these themes in later conversations. Garcia argues the platform’s realistic nature and hyper-personalised interactions led her son to isolate himself, suffer from low self-esteem, and ultimately feel unable to live outside of the world the chatbot created.

Character.AI offered condolences and noted it has since implemented additional safety features, such as prompts for users expressing self-harm thoughts, to improve protection for younger users. Garcia’s lawsuit also names Google, alleging it extensively contributed to Character.AI’s development, although Google denies involvement in the product’s creation.

The lawsuit is part of a wider trend of legal claims against tech companies by parents concerned about the impact of online services on teenage mental health. While Character.AI, with an estimated 20 million users, faces unique claims regarding its AI-powered chatbot, other platforms such as TikTok, Instagram, and Facebook are also under scrutiny.

Apple offers $1M to hackers to secure private AI cloud

Apple is raising the stakes in its commitment to data security by offering up to $1 million to researchers who can identify vulnerabilities in its new Private Cloud Compute service, set to debut next week. The service will support Apple’s on-device AI model, Apple Intelligence, enabling more powerful AI tasks while prioritising user privacy. The bug bounty program targets serious flaws, with the top rewards reserved for exploits that could allow remote code execution on Private Cloud Compute servers.

Apple’s updated bug bounty program also includes rewards of up to $250,000 for any vulnerability that could expose sensitive customer information or user prompts processed by the private cloud. Security issues affecting sensitive user data in less critical ways can still earn researchers substantial rewards, signalling Apple’s broad commitment to protecting its users’ AI data.

With this move, Apple builds on past security initiatives, including its specialised research iPhones designed to enhance device security. The new Private Cloud Compute bug bounty is part of Apple’s approach to ensure that as its AI capabilities grow, so does its infrastructure to keep user data secure.