Texas lawmakers are considering a significant step in regulating artificial intelligence with the proposed Responsible AI Governance Act. The legislation targets high-risk AI systems, defining them as tools influencing decisions on education, employment, healthcare, and other critical services. Developers and deployers of such systems would face stringent requirements under the Act.
The draft requires developers to produce detailed risk reports and maintain data records to ensure transparency. Deployers must oversee human involvement in AI-driven decisions and report discrimination risks promptly. Regular assessments are required to address potential algorithmic biases and confirm that systems are used as intended.
The Act also sets clear prohibitions, including bans on systems that manipulate behaviour, conduct social scoring, or collect biometric data without authorisation. Developers and deployers must disclose to consumers when they are interacting with AI, providing clear explanations of system purposes and decision-making processes.
With enforcement led by the Texas Attorney General, businesses are urged to evaluate their practices and prepare for potential changes. The legislation could serve as a model for AI governance nationwide, shaping the future of ethical AI development and deployment.
A federal judge has upheld California’s law, SB 976, which restricts companies from serving addictive content feeds to minors. The decision allows the legislation to take effect, beginning a significant shift in how social media platforms operate in the state.
Companies must now ensure that addictive feeds, defined as algorithms recommending content based on user behaviour rather than explicit preferences, are not shown to minors without parental consent. By 2027, businesses will also need to implement age assurance techniques, such as age estimation models, to identify underage users and tailor their feeds accordingly.
The tech industry group NetChoice, representing firms like Meta, Google, and X, attempted to block the law, citing First Amendment concerns. While the judge dismissed their challenge to the addictive feeds provision, certain aspects of the law, such as limits on nighttime notifications for minors, were blocked.
This ruling marks a notable step in California’s efforts to regulate the digital landscape and protect younger users from potentially harmful online content.
A US Army soldier, Cameron John Wagenius, has been charged with selling and attempting to sell stolen confidential phone records. Arrested on 20 December, Wagenius faces two counts of unlawfully transferring confidential information, filed in a Texas federal court. His rank and station have not been disclosed, though he is reportedly based at Fort Cavazos in Texas.
Authorities allege that Wagenius, known online as ‘Kiberphant0m’, claimed involvement in hacking activities, including phone records linked to high-profile figures. The case is connected to a broader investigation involving hackers accused of stealing sensitive personal and financial information. Prosecutors have revealed the involvement of a hacking group targeting data storage firm Snowflake’s customers.
Cybersecurity researchers identified Wagenius after members of the group issued threats against them. Law enforcement acted swiftly following the tip-off, according to Allison Nixon of Unit 221B. The prosecution is being handled in Seattle, where two co-defendants, Connor Moucka and John Binns, face related charges for extensive data breaches.
The Department of Justice and the FBI have yet to comment on the case. Wagenius has been ordered to appear in Seattle, where the investigation continues.
As 2024 concludes, China’s AI sector is making global waves with groundbreaking innovations. DeepSeek, a Hangzhou-based startup, unveiled its V3 large language model, which rivals leading proprietary models like GPT-4o. Remarkably, V3 was developed in just two months with minimal resources, showcasing China’s ability to deliver cutting-edge AI solutions at significantly lower costs. Experts have praised the model’s efficiency and ingenuity, highlighting its potential to disrupt the industry.
China’s AI ambitions extend beyond language models. In November, ShengShu Technology introduced Vidu-1.5, an image-to-video tool that generates dynamic visuals in record time. The tool gained recognition for its creative applications, such as crafting an ink-style promotional video for Sony’s ‘Venom: The Last Dance.’ The innovation has drastically reduced production times and costs in the film industry, inspiring artists with its blend of tradition and technology.
AI-driven creativity also thrives in literature and virtual interaction. Researchers at East China Normal University used AI to author fantasy novels, completing projects in weeks that would take human authors a year. Meanwhile, apps like Xingye are redefining digital companionship, integrating AI chatbots with user-generated content to create unique community experiences. These advancements have resonated globally, with Chinese AI apps gaining popularity in markets like the United States.
China’s e-commerce sector is leveraging AI to transform operations and consumer experiences. Entrepreneurs like Lyu Hongwei have used AI to identify trends, tailor product offerings, and accelerate growth. Analysts predict that AI-driven tools will continue to enhance business efficiency, paving the way for a more personalised and streamlined shopping experience.
The US Department of Justice and the Federal Trade Commission have initiated legal proceedings against fintech company Dave and its CEO, Jason Wilk. Allegations include deceptive advertising practices linked to cash advances promoted on the platform, some of which users reportedly never received.
Authorities argue the company engaged in unfair practices, including hidden fees, misuse of customer tips, and inadequate cancellation processes for recurring charges. The complaint seeks monetary penalties, consumer redress, and measures to prevent future violations.
Dave denies the allegations, asserting that many of the claims are inaccurate. The company has introduced a simplified fee structure, removing the tips and express fees that regulators criticised. However, the updated structure took effect on 4 December for new users only, with existing customers transitioning gradually.
The legal filing replaces an earlier complaint from November, which targeted only the company and did not seek penalties. Regulators now aim for broader accountability by naming the CEO in the amended complaint.
Healthcare organisations in the US may face stricter cybersecurity rules to address the growing threat of data breaches. Proposals introduced by the Biden administration seek to prevent sensitive patient information from being leaked through hacking or ransomware attacks. Measures include mandatory encryption and compliance checks to enhance network security.
Data breaches have exposed the healthcare information of over 167 million people in 2023 alone, according to Anne Neuberger, Deputy National Security Advisor for Cyber and Emerging Technology. The updated standards, introduced by the Office for Civil Rights under the Health Insurance Portability and Accountability Act (HIPAA), are estimated to cost $9 billion in the first year and $6 billion annually in subsequent years.
Officials highlighted the rising danger of healthcare cyberattacks, with hacking and ransomware incidents increasing by 89% and 102% respectively since 2019. Hospitals often face operational disruption, while leaked data can lead to blackmail. A 60-day public comment period will allow stakeholders to provide input before finalising the rules.
The new standards are designed to safeguard healthcare networks and protect Americans’ private information, including mental health records. Strengthened cybersecurity is expected to reduce vulnerabilities and ensure the safety of critical healthcare systems.
A Moscow court has fined TikTok three million roubles (around $28,930) for failing to meet Russian legal requirements. The court’s press service confirmed the verdict but did not elaborate on the specific violation.
The social media platform, owned by ByteDance, has been facing increasing scrutiny worldwide. Allegations of non-compliance with legal frameworks and security concerns have made headlines in multiple countries.
TikTok encountered further setbacks recently, including a year-long ban in Albania last December. Canadian authorities also ordered the company to halt operations, citing national security threats.
The fine in Russia reflects the mounting regulatory challenges for TikTok as it navigates stricter oversight in various regions.
A series of intrusions targeting Chrome browser extensions has compromised multiple companies since mid-December, experts revealed. Among the victims is Cyberhaven, a California-based data protection company. The breach, confirmed by Cyberhaven on Christmas Eve, is reportedly part of a larger campaign aimed at developers of Chrome extensions across various industries.
Cyberhaven stated it is cooperating with federal law enforcement to address the issue. Browser extensions, commonly used to enhance web browsing, can also pose risks when maliciously altered. Cyberhaven’s Chrome extension, for example, is designed to monitor and secure client data within web-based applications.
Experts identified other compromised extensions, including tools related to AI and virtual private networks. Jaime Blasco, co-founder of Texas-based Nudge Security, noted that the attacks appear opportunistic, aiming to harvest sensitive data from numerous sources. Some breaches date back to mid-December, indicating an ongoing effort.
Federal authorities, including the US cyber watchdog CISA, have redirected inquiries to the affected companies. Alphabet, whose Google unit makes the Chrome browser, has yet to respond to requests for comment.
As concerns grow over the impact of smartphones on children, several European countries are implementing or debating restrictions on their use in schools. France, for example, has prohibited phones in primary and secondary schools since 2018 and recently extended the policy to include ‘digital breaks’ at some institutions. Similarly, the Netherlands and Hungary have adopted bans, with exceptions for educational purposes or special needs, while Italy, Greece, and Latvia have also imposed restrictions.
The debate is fuelled by studies showing that smartphones can distract students, though some argue they can also be useful for learning. A 2023 UNESCO report recommended limiting phones in schools to support education, with more than 60 countries now following similar measures. However, enforcement remains a challenge, as some reports suggest that many students still find ways to use their devices despite the bans.
Experts remain divided on the issue. While some highlight the risks of distraction and mental health impacts, others emphasise the need for balance. ‘Banning phones can be beneficial, but we must ensure children have adequate alternatives for education and communication,’ said Ben Carter, a professor of medical statistics at King’s College London.
The trend reflects broader concerns about screen time among children, with countries like Sweden and Luxembourg calling for clearer rules to promote healthier digital habits. While opinions differ, the growing movement underscores a collective effort to create focused, engaging, and healthier learning environments.
In the rapidly expanding online world, teenagers are becoming prime targets for scammers. Over a recent five-year period, financial losses reported by teens increased by an alarming 2,500%, outpacing the 805% rise among seniors. Experts attribute this to scammers exploiting the tech-savviness of younger users while capitalising on their lack of experience.
Scammers use various tactics, including impersonating online influencers, romance schemes, and phishing for sensitive information through gaming platforms. One growing threat involves sextortion, where victims are coerced into sharing explicit images that are later used to demand money under the threat of public exposure. Tragically, such incidents have already led to devastating consequences, including teen suicides.
Parents are urged to foster open communication with their children about these risks, creating a safe space for them to share any unsettling online encounters. Basic steps like monitoring app usage, staying connected on social media, and setting clear tech boundaries can go a long way in shielding teens from these dangers. The key, experts stress, is building trust and ensuring children know they have unwavering support, no matter the situation.