Turkcell and ZTE have set a new European record for 5G-Advanced, reaching 32 Gbps during a trial in Istanbul on 5 November 2024. The milestone was achieved using ZTE’s 1.6 GHz-bandwidth mmWave AAU, 64TR N78 AAU, and a commercial CPE.
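As a rough back-of-envelope illustration (assuming, though the announcement does not state this, that the full 1.6 GHz of mmWave bandwidth carried the entire 32 Gbps), the implied aggregate spectral efficiency works out to about 20 bit/s/Hz:

```python
# Back-of-envelope check of the reported trial figures.
# Assumption (not stated in the announcement): the full 1.6 GHz of
# mmWave bandwidth carried the entire 32 Gbps of throughput.
throughput_bps = 32e9    # reported peak throughput: 32 Gbps
bandwidth_hz = 1.6e9     # reported mmWave channel bandwidth: 1.6 GHz

spectral_efficiency = throughput_bps / bandwidth_hz  # bit/s per Hz
print(f"Implied aggregate spectral efficiency: {spectral_efficiency:.0f} bit/s/Hz")
```

Such a figure would reflect the combined effect of high-order modulation and multiple spatial streams rather than the efficiency of any single mmWave carrier.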
The significance of 5G-Advanced technology lies in its ability to offer faster data transmission, lower latency, and higher capacity, which will enable a wide range of applications, such as live broadcasting, extended reality (XR), ultra-HD video streaming, and ultra-low latency services. These advancements promise to provide users with an immersive audio-visual experience, setting a new standard for the digital landscape. The improved network capabilities will also open up new business opportunities, particularly in sectors like smart cities and autonomous vehicles, where high-speed connectivity is crucial.
Why does it matter?
The achievement underscores the strong collaboration between Turkcell and ZTE, which has been key to driving innovation in 5G technology. Both companies are committed to continuing their research and development efforts to expand the potential of 5G-Advanced. Their joint work aims to deliver smarter, more efficient, and more immersive user experiences, creating new opportunities for businesses and consumers in an increasingly digital world.
Aravind Srinivas, CEO of AI search company Perplexity, offered to step in and support New York Times operations amid a looming strike by the newspaper’s tech workers. The NYT Tech Guild announced the planned strike for November 4 after months of seeking better pay and working conditions. Representing workers involved in software support and data analysis on the business side, the guild has requested a 2.5% annual wage increase and a guaranteed two-day in-office work policy.
As tensions escalated, New York Times publisher AG Sulzberger called the timing of the strike ‘troubling’, noting that the paper’s election coverage is a public service at a crucial time. Responding publicly, Srinivas offered to help ensure uninterrupted access to the Times’s election news, sparking controversy as critics accused him of ‘scabbing’, a term for working in place of striking employees.
Srinivas clarified that his intent was to provide infrastructure support, not replace journalists, as his company has recently launched its own election information platform. However, the New York Times and Perplexity have been at odds recently, with the Times issuing a cease-and-desist letter last month over Perplexity’s alleged scraping of its content for AI use.
At TechCrunch Disrupt 2024, data management leaders advised AI-driven businesses to focus on incremental, practical applications rather than expansive, large-scale projects. Chet Kapoor, CEO of DataStax, stressed that AI’s effectiveness relies heavily on having robust, unstructured data at scale, but warned companies against rushing into overly ambitious initiatives. The discussion featured insights from Kapoor, Vanessa Larco of NEA, and Fivetran’s CEO George Fraser, all of whom advocated a targeted approach to data application in generative AI.
Rather than applying AI across all company functions immediately, Larco suggested that firms begin with well-defined objectives. Identifying relevant data is key, she said, and applying it selectively can avoid the pitfalls of costly errors. Companies looking to capitalise on AI should ‘work backwards’, focusing first on the issue to be solved and gathering the specific data required, Larco added.
Fraser underscored the importance of addressing current needs before planning for broader scaling. Many innovation costs, he pointed out, stem from projects that fail rather than those that succeed. His advice: ‘Only solve the problems you have today’.
Kapoor likened today’s generative AI era to the early days of mobile apps, emphasising that most AI projects are currently in exploratory stages. He believes next year will see transformational AI applications begin to shift company trajectories.
The EU’s voluntary code of practice on disinformation will soon become a formal set of rules under the Digital Services Act (DSA). According to Paul Gordon, assistant director at Ireland’s media regulator Coimisiún na Meán, efforts are underway to finalise the transition by January. He emphasised that the new regulations should lead to more meaningful engagement from platforms, moving beyond mere compliance.
Originally established in 2022 and signed by 44 companies, including Google, Meta, and TikTok, the code outlines commitments to combat online disinformation, such as increasing transparency in political advertising and enhancing cooperation during elections. A spokesperson for the European Commission confirmed that the code aims to be recognised as a ‘Code of Conduct’ under the DSA, which already mandates content moderation measures for online platforms.
The DSA, which has applied to all platforms since February, imposes strict rules on the largest online services, requiring them to mitigate risks associated with disinformation. The new code will help these platforms demonstrate compliance with the DSA’s obligations, as assessed by the Commission and the European Board of Digital Services. However, no specific timeline has been provided for the code’s formal implementation.
An AI transcription tool called Whisper, developed by OpenAI and used by thousands of clinicians and health systems, has come under scrutiny after researchers found it sometimes produces inaccurate transcriptions. Whisper, which powers the medical transcription tool from the company Nabla, has reportedly transcribed around 7 million medical conversations. While it accurately summarises many doctor-patient exchanges, researchers from Cornell University and the University of Washington discovered instances where the AI generated entirely fabricated sentences, sometimes even adding irrelevant or nonsensical phrases.
The study, which was presented at the Association for Computing Machinery FAccT conference in Brazil in June, highlighted that Whisper made errors in about 1 percent of transcriptions, often producing ‘hallucinations’ — fabricated statements in response to silences during conversations. These inaccuracies were especially common in audio samples featuring patients with aphasia, a language disorder that results in frequent pauses. In one case, Whisper inserted phrases that were more typical of a YouTube video, such as “Thank you for watching!”
Nabla, aware of the issue, has stated it is working on solutions to mitigate these hallucinations. In response, OpenAI emphasised its commitment to reducing such errors, particularly in high-stakes situations like healthcare. An OpenAI spokesperson noted that Whisper’s usage policies discourage its application in critical decision-making contexts and that guidance for open-source use advises against deployment in high-risk domains.
The study’s findings underscore the complexities of applying AI tools in sensitive settings like healthcare, where precise communication is vital. With Whisper being used across 40 healthcare systems, the issue raises broader questions around the suitability of AI transcription tools in medical environments and the ongoing need for oversight in their deployment.
The Open Source Initiative (OSI) has introduced version 1.0 of its Open Source AI Definition (OSAID), setting new standards for AI transparency and accessibility. Developed over several years in collaboration with academia and industry, the OSAID aims to establish clear criteria for what qualifies as open-source AI. The OSI says the definition will help align policymakers, developers, and industry leaders on a common understanding of ‘open source’ in the rapidly evolving field of AI.
According to OSI Executive Vice President Stefano Maffulli, the goal is to make sure AI models labelled as open source provide enough detail for others to recreate them and disclose essential information about training data, such as its origin and processing methods. The OSAID also emphasises that open source AI should grant users freedom to modify and build upon the models, without restrictive permissions. While OSI lacks enforcement power, it plans to advocate for its definition as the AI community’s reference point, aiming to combat ‘open source’ claims that don’t meet OSAID standards.
The new definition comes as some companies, including Meta and Stability AI, use the open-source label without fully meeting transparency requirements. Meta, a financial supporter of the OSI, has voiced reservations about the OSAID, citing the need for protective restrictions around its Llama models. In contrast, OSI contends that AI models should be openly accessible to allow for a truly open-source AI ecosystem, rather than restricted by proprietary data and usage limitations.
Maffulli acknowledges the OSAID may need frequent updates as technology and regulations evolve. OSI has created a committee to monitor its application and adjust as necessary, with an eye on refining the open-source definition to address emerging issues like copyright and proprietary data.
China has launched a pilot programme to expand foreign investment in its value-added telecom services sector, allowing foreign companies to wholly own businesses such as internet data centres and engage in online data and transaction processing. The initiative is being implemented in four key regions – Beijing’s national demonstration zone, Shanghai’s free trade zone, the Hainan Free Trade Port, and Shenzhen’s socialist modernisation pilot zone.
The programme aims to align China’s telecom sector with high-standard international economic and trade rules, improve regulatory frameworks, and reduce market barriers for foreign investors. By opening up sectors like cloud computing and computing power services, China seeks to diversify market supply, boost innovation, and foster greater integration of digital technologies across industries.
In response to this initiative, companies like HSBC are preparing to participate, with HSBC Fintech Services applying for an internet content provider permit to enhance its digital services and business transformation. The Ministry of Industry and Information Technology (MIIT) has committed to monitoring the programme’s effects, possibly expanding its scope based on its success. By improving the business environment and encouraging new business models, China is positioning itself as a more attractive destination for foreign investment in the telecommunications sector.
Ofcom has linked the violent unrest in England and Northern Ireland during the summer to the rapid spread of harmful content on social media platforms. The media regulator found that disinformation and illegal posts circulated widely online following the Southport stabbings in July, which sparked the disorder.
While some platforms acted swiftly to remove inflammatory content, others were criticised for uneven responses. Experts highlighted the significant influence of social media in driving divisive narratives during the crisis, with some calling for platforms to be held accountable for unchecked dangerous content.
Ofcom, which has faced criticism for its handling of the situation, argued that its enhanced powers under the Online Safety Act had not yet come into force at the time. The new legislation will introduce stricter responsibilities for tech firms in tackling harmful content and disinformation.
The unrest, the worst seen in the United Kingdom in a decade, resulted in arrests and public scrutiny of tech platforms. A high-profile row erupted between the Prime Minister and Elon Musk, after the billionaire suggested that civil war was inevitable following the disorder, a claim strongly rebuked by Sir Keir Starmer.
The Federal Communications Commission (FCC) has enacted new regulations requiring all mobile phones sold in the US to be compatible with hearing aids, significantly enhancing accessibility for individuals with hearing loss. Specifically, these rules mandate that manufacturers adopt standard Bluetooth coupling for universal connectivity, thereby eliminating proprietary standards.
In addition, mobile handsets must meet specific volume benchmarks to ensure that sound quality is maintained when the volume is increased. Furthermore, to inform consumers, handset manufacturers must clearly label their devices to indicate compliance with these new hearing aid compatibility standards.
Notably, these changes stem from years of study and advocacy by the Hearing Aid Compatibility (HAC) Task Force, which provided recommendations to the FCC. As a result, the FCC’s regulations aim to provide greater choice and improved functionality for the 48 million Americans with hearing loss, ensuring they can access a wider range of mobile technologies and features.
CS Disco, Inc. has officially launched its AI-driven Cecilia platform in the European Union and the United Kingdom. The Cecilia AI Platform helps legal professionals review large datasets faster, allowing for quicker identification and analysis of crucial documents. The platform offers tools like Cecilia Q&A, which answers fact-based questions from a user’s document set, streamlining the review process.
The company’s generative AI capabilities are designed to boost efficiency in legal work, with features such as single document Q&A and document summaries helping attorneys quickly navigate complex or lengthy documents. The platform also supports documents in multiple languages, offering significant time savings compared to traditional methods.
Early adopters in the United States have already reported success with Cecilia’s tools, praising their speed and accuracy. CS Disco is focusing on enabling legal teams to handle large volumes of data with greater precision, as it expands its services to the European market.
The Cecilia platform is expected to grow further, with additional AI features planned for release in the EU and UK by 2025. DISCO aims to continue its role as a leader in AI-enabled legal technology, improving outcomes for clients across different markets.