AI workflows spark stress and productivity concerns

AI tools were introduced at Everest PR to streamline tasks, but the results were not as expected. Founder Anurag Garg noticed that instead of boosting efficiency, the technology created additional stress. His team reported that using AI tools like ChatGPT was time-consuming and added new complexities, leading to frustration and burnout.

Garg’s team struggled to keep up with frequent software updates and found that managing multiple AI platforms made their work harder. This sentiment is echoed in surveys showing that many workers feel AI tools increase their workloads rather than reduce them. One study found that 61% of workers believe AI will increase their chances of burnout, a figure that rises to 87% among younger workers.

Even legal professionals are feeling overwhelmed by AI’s impact on their workloads. Leah Steele, a coach for lawyers, explained that tech-driven environments often lead to reduced job satisfaction and fear of redundancy. The Law Society also highlights the challenges of implementing AI, emphasising that learning new tools requires time and effort, which can add pressure rather than alleviate it.

While some argue that AI can empower small firms by enhancing productivity, others stress that the tools must be adopted carefully so they do not overwhelm staff. Garg has since scaled back his team’s reliance on AI, finding that a more selective approach has improved employee well-being and reconnected the team with its work.

Zoom expands into healthcare with AI note-taking tool

Zoom has announced a partnership with Suki, a leading AI medical scribe provider, to offer doctors on its platform an AI-powered tool that automates note-taking during telehealth visits. With Zoom accounting for over a third of telehealth appointments in the US, this move aims to help clinicians reduce time spent on paperwork, improving efficiency during virtual consultations.

The partnership marks Zoom’s shift from being solely a video-conferencing company to one that integrates AI tools designed for workplace efficiency, a vision championed by its CEO, Eric Yuan. Zoom selected Suki after evaluating several other AI medical scribe startups, a further boost for Suki after the company raised $70M in funding earlier this month.

This development highlights a broader trend in healthcare, with companies like Amazon’s One Medical and Microsoft’s Nuance also leveraging AI for medical note-taking, helping providers manage documentation more effectively. Despite growing competition, investors believe there is still room for specialised AI solutions in both large healthcare systems and smaller medical practices.

BlackRock unveils new funds focused on AI growth

BlackRock, the world’s largest asset manager, has launched two new exchange-traded funds (ETFs) to provide investors with exposure to the rapidly growing market for AI. AI is predicted to have widespread applications across industries, and BlackRock sees it as a major force driving long-term investment opportunities.

The iShares A.I. Innovation and Tech Active ETF will focus on global AI and technology stocks, while the iShares Technology Opportunities Active ETF will invest in tech companies across various sectors, including semiconductors, software, and hardware. Both funds are designed to help investors capitalise on the increasing integration of AI into different industries.

Despite recent mixed demand for thematic ETFs, BlackRock’s move reflects its confidence in AI’s potential. The company continues to believe AI will shape the future of industries from technology to financial services, offering unique investment opportunities.

Earlier this month, BlackRock reported record-high assets under management, boosted by a strong US stock market rally, further highlighting its successful strategy in the investment market.

Anthropic launches AI to streamline developer workflows

Anthropic, the AI startup backed by Alphabet and Amazon, has launched updated AI models with a new feature designed to automate computer tasks, reducing the need for human interaction. The company’s latest innovation allows AI to perform actions like moving the mouse, clicking, and typing, simplifying complex tasks for software developers. This capability brings Anthropic closer to creating AI agents that can handle multi-step processes, a significant advancement beyond traditional chatbots.

The new feature, included in Anthropic’s mid-tier Claude 3.5 Sonnet model, is tailored to help developers with tasks like coding and can even navigate services such as Google Search or Apple Maps. While it shows promise, the company has implemented safeguards to prevent misuse, such as spam or election interference. Anthropic continues to seek feedback from businesses to refine the tool and is exploring how to make it available to consumers in the future.
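For developers wondering what this looks like in practice, the sketch below shows a single request against the computer-use beta. It is a minimal illustration, assuming the anthropic Python SDK and the tool type, model string, and beta flag that Anthropic published at launch; those identifiers may have changed since, so treat them as placeholders rather than a definitive integration.

```python
# Minimal sketch: ask Claude 3.5 Sonnet to plan computer actions.
# Assumes the identifiers Anthropic documented for the computer-use beta
# ("computer_20241022", "computer-use-2024-10-22"); verify against current docs.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    betas=["computer-use-2024-10-22"],
    tools=[{
        "type": "computer_20241022",
        "name": "computer",
        "display_width_px": 1024,
        "display_height_px": 768,
    }],
    messages=[{"role": "user", "content": "Open a browser and look up flights to Lisbon."}],
)

# The model does not click or type by itself: it returns tool_use blocks
# (screenshot, mouse_move, left_click, type, ...) that the calling program
# must execute and answer with tool_result messages in a loop.
for block in response.content:
    if block.type == "tool_use":
        print("proposed action:", block.input)
    elif block.type == "text":
        print(block.text)
```

The design point worth noting is that the model only proposes actions; the developer’s own code executes them, which is also where safeguards like the misuse checks mentioned above can be enforced.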

Anthropic’s Chief Science Officer, Jared Kaplan, demonstrated the potential of this AI to automate workflows, while Instagram co-founder Mike Krieger, now Anthropic’s chief product officer, expressed excitement about further advancing the technology to fully automate tasks like booking flights.

Thousands of artists protest AI’s unlicensed use of their work

Thousands of creatives, including Kevin Bacon, Thom Yorke, and Julianne Moore, have signed a petition opposing the unlicensed use of their work to train AI. The 11,500 signatories believe that such practices threaten their livelihoods and call for better protection of creative content.

The petition argues that using creative works without permission for AI development is an ‘unjust threat’ to the people behind those works. Signatories from various industries, including musicians, writers, and actors, are voicing concerns over how their work is being used by AI companies.

British composer Ed Newton-Rex, who organised the petition, has spoken out against AI companies, accusing them of ‘dehumanising’ art by treating it as mere ‘training data’. He highlighted the growing concerns among creatives about how AI may undermine their rights and income.

The United Kingdom government is currently exploring new regulations to address the issue, including a potential ‘opt out’ model for AI data scraping, as lawmakers look for ways to protect creative content in the digital age.

CrewAI helps businesses automate with third-party AI tools

CrewAI, a startup founded by João Moura, is revolutionising back-office automation by leveraging third-party AI models from companies like OpenAI and Anthropic. Instead of building its own AI, CrewAI enables businesses to create custom workflows that automate repetitive tasks such as report summarisation and onboarding processes. Through a simple dashboard, customers can deploy and manage their AI-driven automations, using the tools they already rely on.

Positioned as a more flexible alternative to traditional robotic process automation (RPA), CrewAI allows companies to integrate AI ‘agents’ that can handle complex tasks without rigid, pre-set rules. This adaptability, along with a focus on data privacy, is drawing the attention of investors, with the startup raising $18M in funding and attracting 150 customers within its first year of operation.
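To make the ‘agent’ idea concrete, here is a minimal sketch using the open-source crewai Python package that the startup maintains. The role, task wording, and reliance on the default OpenAI backend are illustrative assumptions for this example, not details of CrewAI’s hosted product.

```python
# Minimal sketch of a CrewAI-style workflow: one agent, one task.
# Assumes `pip install crewai` and an OPENAI_API_KEY in the environment,
# since the library delegates the actual work to a third-party model.
from crewai import Agent, Task, Crew

summariser = Agent(
    role="Operations analyst",
    goal="Condense internal reports into short, actionable summaries",
    backstory="You help managers digest long back-office documents quickly.",
)

summarise_report = Task(
    description="Write a five-bullet summary of the key steps in a standard employee onboarding process.",
    expected_output="Five concise bullet points covering the key steps.",
    agent=summariser,
)

crew = Crew(agents=[summariser], tasks=[summarise_report])
result = crew.kickoff()  # runs the task through the configured LLM and returns its output
print(result)
```

Unlike rule-based RPA scripts, nothing here hard-codes the steps: the agent’s role and the task description are plain language, and the underlying model decides how to produce the output.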

With competition from other AI-driven automation startups, CrewAI is looking to expand further, offering a new Enterprise Cloud subscription plan that includes enhanced security features and templates for workflow creation. Based in San Francisco, with employees across the US and Brazil, the company aims to grow its workforce and continue advancing its automation tools.

Denmark enhances digital security and innovation with expanded cyber strategy

The Danish government has relaunched the National Cyber Security Council (NCSC) with an expanded mandate to strengthen digital security across critical sectors while advancing AI capabilities. That effort is part of a larger initiative that includes the country’s €100 million National Strategy for Digitalisation (NSD), which supports AI development through regulatory sandboxes and guidelines aligned with the EU’s AI Act.

The NCSC will promote public-private partnerships, enhance data sharing between government, businesses, and academia, and protect critical infrastructure from rising cyber threats. In tandem, the government’s Artificial Intelligence Guideline (AIG) project helps companies and public authorities adopt AI securely, offering a framework to test and integrate AI technologies within a regulatory safe zone. These combined efforts boost digital transformation while ensuring strong cybersecurity and legal compliance.

The NCSC’s new mission addresses growing cybersecurity challenges, particularly in light of geopolitical instability, such as Russia’s invasion of Ukraine. The council aims to foster collaboration between national security agencies and small and medium-sized enterprises (SMEs) by assembling experts from key sectors, including businesses, universities, and municipalities.

The Danish government’s investment in AI development is also supported by regulatory sandboxes that allow companies to innovate safely within EU legal frameworks such as GDPR and the AI Act. The broader NSD further targets improvements in digital education, workforce skills, and business transformation, ensuring that Denmark’s push for innovation rests on a solid foundation of security and regulatory oversight.

Marvell Technology announces price increase amid rising AI demand and robust growth

Marvell Technology, a leading US chip manufacturer, has announced it will raise prices across its entire product line starting January 1, marking the first major price increase in the optical communications sector. This decision comes after Marvell’s strong financial performance last quarter, driven by the surging demand for AI-related products, including ASICs and silicon photonics for data centres. The price hike is seen as a way to capture new market opportunities and support ongoing investments in innovative technologies.

A leaked notification letter from Marvell’s Senior Vice President of Global Sales, Dean Jarnac, revealed that the global demand for AI and accelerated computing is pushing companies like Marvell to expand production capacity and invest in new manufacturing bases. Jarnac emphasised that the price increase is necessary to support these investments, but assured customers that the impact would be minimised and encouraged them to plan their orders accordingly.

Marvell’s recent growth has been fuelled by booming demand in the AI space, particularly in its data centre business. Key products such as 800G PAM and 400ZR optical solutions have been central to this success. Marvell’s CEO, Matt Murphy, highlighted the company’s optimistic outlook, expecting continued revenue growth in the coming quarter as demand for AI and data centre solutions continues to rise.

Neysa secures $30M as it challenges global AI leaders

Indian AI startup Neysa has raised $30 million in a Series A funding round, aiming to compete with global hyperscalers like AWS and Google Cloud. Led by Sharad Sanghi, Neysa offers AI infrastructure and machine learning platforms, catering to businesses seeking flexibility in AI solutions. The startup focuses on both public cloud and private clusters, differentiating itself by using open-source platforms with no client lock-in.

Neysa plans to use the new funding to enhance its infrastructure, expand research and development, and introduce new products, including a developer platform and inference-as-a-service. Since launching its flagship platform Velocis in July, the Mumbai-based company has grown to 12 paying customers across industries like banking and media, with 70% opting for private clusters. The startup expects to enter global markets in the coming months, with additional funding already in the works.

Neysa’s rise reflects the growing demand for AI infrastructure in India, where the market is projected to reach $17 billion by 2027. With fresh capital and plans for expansion, Neysa is positioning itself as a significant player in the AI space, both in India and abroad.

AI in waste management raises privacy concerns

Cities are increasingly turning to AI to enhance waste management and reduce contamination in recycling and composting efforts. In East Lansing, Michigan, where a significant student population often contributes to recycling contamination, city officials have launched a pilot program using AI to address the issue. The initiative includes equipping recycling trucks with AI-powered cameras that identify non-recyclable items and sending personalised postcards to residents to inform them of their mistakes. This approach has reportedly led to a 20% reduction in recycling contamination.

Despite these promising results, privacy concerns have arisen regarding the collection of personal data through these AI systems. Experts warn that the information gathered from residents’ trash could expose sensitive details about their lives, potentially leading to identity theft or misuse by authorities. For instance, a discarded pregnancy test could be used against a woman in states with strict abortion laws. This phenomenon, referred to as ‘mission creep,’ raises alarms about how technologies designed for one purpose can evolve into surveillance tools.

City officials, such as East Lansing’s environmental sustainability manager Cliff Walls and Michael Hancharyk, environmental manager for the Alberta city of Leduc, acknowledge these privacy issues and are taking steps to mitigate risks. They emphasise working with vendors to ensure data protection and transparency with residents. Hancharyk noted that his city had to comply with Alberta’s privacy regulations before implementing its program.

While acknowledging the importance of improving waste management, cybersecurity experts stress the need for municipalities to carefully weigh the benefits of AI against the potential risks to residents’ privacy. They advocate for thorough assessments of new technologies and their implications, particularly for sensitive populations. As cities continue to innovate in waste management, striking a balance between efficiency and privacy will be crucial.