Scotland’s Makar, Peter Mackay, has voiced concerns about the growing role of artificial intelligence in literature, warning that it could threaten the livelihoods of new writers. With AI tools capable of generating dialogue, plot ideas, and entire narratives, Mackay fears that competing with machine-created content may become increasingly difficult for human authors.
To address these challenges, he has proposed clearer distinctions between human and AI-generated work. Ideas discussed include a certification system similar to the Harris Tweed Orb, ensuring books are marked as ‘100% AI-free.’ Another suggestion is an ingredient-style label outlining an AI-generated book’s influences, listing percentages of various literary styles.
Mackay also believes literary prizes, such as the Highland Book Prize, can play a role in safeguarding human creativity by celebrating originality and unique writing styles, qualities that AI struggles to replicate. He warns of the day an AI-generated book wins a major award, questioning what it would mean for writers who spend years perfecting their craft.
South Korea’s National Intelligence Service (NIS) has raised concerns about the Chinese AI app DeepSeek, accusing it of excessively collecting personal data and using it for training purposes. The agency warned government bodies last week to take security measures, highlighting that unlike other AI services, DeepSeek collects sensitive data such as keyboard input patterns and transfers it to Chinese servers. Some South Korean government ministries have already blocked access to the app due to these security concerns.
The NIS also pointed out that DeepSeek grants advertisers unrestricted access to user data and stores South Korean users’ data in China, where it could be accessed by the Chinese government under local laws. The agency also noted discrepancies in the app’s responses to sensitive questions, such as the origin of kimchi, which DeepSeek claimed was Chinese when asked in Chinese, but Korean when asked in Korean.
DeepSeek has also been accused of censoring political topics, such as the 1989 Tiananmen Square crackdown, prompting the app to suggest changing the subject. In response to these concerns, China’s foreign ministry stated that the country values data privacy and security and complies with relevant laws, denying that it pressures companies to violate privacy. DeepSeek has not yet commented on the allegations.
Aiman Ezzat, CEO of Capgemini, has criticised the European Union’s AI regulations, claiming they are overly restrictive and hinder the ability of global companies to deploy AI technology in the region. His comments come ahead of the AI Action Summit in Paris and reflect increasing frustration from private sector players with EU laws. Ezzat highlighted the complexity of navigating different regulations across countries, especially in the absence of global AI standards, and argued that the EU’s AI Act, hailed as the most comprehensive worldwide, could stifle innovation.
As one of Europe’s largest IT services firms, Capgemini works with major players like Microsoft, Google Cloud, and Amazon Web Services. The company is concerned about the implementation of AI regulations in various countries and how they affect business operations. Ezzat is hopeful that the AI summit will provide an opportunity for regulators and industry leaders to align on AI policies moving forward.
Despite the regulatory challenges, Ezzat spoke positively about DeepSeek, a Chinese AI firm gaining traction by offering cost-effective, open-source models that compete with US tech giants. However, he pointed out that while DeepSeek shares its models, it is not entirely open source, as there is limited access to the data used for training the models. Capgemini is in the early stages of exploring the use of DeepSeek’s technology with clients.
As concerns about AI’s impact on privacy grow, European data protection authorities have begun investigating AI companies, including DeepSeek, to ensure compliance with privacy laws. Ezzat’s comments underscore the ongoing tension between innovation and regulation in the rapidly evolving AI landscape.
Britain’s security officials have reportedly ordered Apple to create a so-called ‘back door’ to access all content uploaded to the cloud by its users worldwide. The demand, revealed by The Washington Post, could force Apple to compromise its security promises to customers. Sources suggest the company may opt to stop offering encrypted storage in the UK rather than comply with the order.
Apple did not immediately respond to requests for comment made outside regular business hours. The Home Office has served Apple with a technical capability notice, which would require the company to grant access to the requested data. However, a spokesperson from the Home Office declined to confirm or deny the existence of such a notice.
In January, Britain initiated an investigation into the operating systems of Apple and Google, as well as their app stores and browsers. The ongoing regulatory scrutiny highlights growing tensions between tech giants and governments over privacy and security concerns.
Vice President JD Vance will lead the US delegation to a major AI summit in Paris next week, but technical staff from the AI Safety Institute will not be included. Around 100 countries will take part in discussions on AI’s potential during the event on 10 and 11 February.
Representatives from the White House Office of Science and Technology Policy will attend, including Principal Deputy Director Lynne Parker and Senior Policy Advisor Sriram Krishnan. However, the Trump administration has scrapped plans for officials from the Commerce and Homeland Security departments to join, including members of the AI Safety Institute.
The institute, created under former President Joe Biden, focuses on AI risk mitigation and has collaborated with companies like OpenAI and Anthropic. Its future under the new administration remains uncertain, especially following Trump’s decision to revoke a Biden-era AI executive order.
The absence of Commerce Department officials may reflect ongoing transitions following the 20 January inauguration. The Paris summit will focus less on AI dangers, a topic dismissed by some in the technology sector, than the previous meetings at Bletchley Park and in Seoul did.
Pinterest projected first-quarter revenue exceeding market expectations, driven by AI-powered advertising tools that boosted ad spending. Shares surged 19% in extended trading following the announcement. The platform benefited from a strong holiday shopping season, setting new records for monthly active users and revenue in the fourth quarter.
AI-driven ad solutions, including the Performance+ suite, have attracted advertisers by automating and improving targeting. Increased engagement from Gen Z users and the introduction of more shoppable content have also made the platform more appealing to marketers. Expanding partnerships with Google and Amazon further diversified revenue streams, although most ad revenue remains concentrated in North America.
Ecommerce merchants using Shopify and Adobe Commerce can now integrate their products into Pinterest more easily. Analysts suggest that while global engagement is high, expanding third-party ad integrations will be crucial for long-term growth.
The company forecasts revenue between $837 million and $852 million, surpassing analyst expectations. Adjusted core earnings are expected to range from $155 million to $170 million, also exceeding estimates. Monthly active users reached a record 553 million, reflecting an 11% year-on-year increase.
South Korea has temporarily blocked government employees’ access to Chinese AI startup DeepSeek over security concerns. A government notice urged ministries and agencies to exercise caution when using AI services, including DeepSeek and ChatGPT. Korea Hydro & Nuclear Power, the defence ministry, and the foreign ministry have all imposed restrictions on DeepSeek access.
Australia and Taiwan have already banned DeepSeek from government devices, citing security risks. Italy previously ordered the company to block its chatbot over privacy concerns. Authorities in the US, India, and parts of Europe are also reviewing the implications of using the AI service. South Korea’s privacy watchdog plans to question DeepSeek on its handling of user data.
Korean businesses are also tightening restrictions on generative AI. Kakao Corp advised employees to avoid using DeepSeek, despite Kakao’s recent partnership with OpenAI. SK Hynix has limited access to generative AI services, and Naver has asked employees not to use AI tools that store data externally.
DeepSeek has not yet responded to requests for comment. The company’s latest AI models, released last month, have drawn attention for their capabilities and cost efficiency. However, growing security concerns are leading governments and corporations to impose stricter controls on their use.
Amazon is set to unveil its long-awaited generative AI-powered Alexa, with a preview event scheduled for 26 February in New York. The update marks the most significant overhaul since the voice assistant’s launch in 2014, aiming to improve user interactions with advanced AI-driven conversations. A final decision on the product’s readiness is expected at an internal meeting on 14 February.
The new AI capabilities will allow Alexa to handle multiple requests in sequence and act on behalf of users without direct input. While initially free for a limited number of users, Amazon is considering a monthly subscription fee of $5 to $10. The company will continue offering the existing version, known as Classic Alexa, though it has reportedly stopped adding new features to it.
Despite Alexa’s early success, usage has remained limited due to a lack of major updates in recent years. The generative AI revamp is designed to make Alexa more useful for tasks like shopping, scheduling, and entertainment. Analysts suggest that even a fraction of users subscribing to the service could generate significant revenue for Amazon.
The update will rely on AI software from Anthropic, a startup backed by Amazon’s $8 billion investment. Previous attempts to launch an improved Alexa were delayed due to concerns over accuracy and performance. With the upcoming release, Amazon hopes to re-establish Alexa as a key part of everyday digital interactions.
Luca Casarini, a prominent Italian migrant rescue activist, was warned by Meta that his phone had been targeted with spyware. The alert, received through WhatsApp, came the same day Meta accused surveillance firm Paragon Solutions of using advanced hacking methods to steal user data. Paragon, reportedly American-owned, has not responded to the allegations.
Casarini, who co-founded the Mediterranea Saving Humans charity, has faced legal action in Italy over his rescue work. He has also been a target of anti-migrant media and previously had his communications intercepted in a case related to alleged illegal immigration. He remains unaware of who attempted to hack his device or whether the attack had judicial approval.
The revelation follows a similar warning issued to Italian journalist Francesco Cancellato, whose investigative news outlet, Fanpage, recently exposed far-right sympathies within Prime Minister Giorgia Meloni’s political youth wing. Italy’s interior ministry has yet to comment on the situation.
Australia has banned Chinese AI startup DeepSeek from all government devices, citing security risks. The directive, issued by the Department of Home Affairs, requires all government entities to prevent the installation of DeepSeek’s applications and remove any existing instances from official systems. Home Affairs Minister Tony Burke stated that the immediate ban was necessary to safeguard Australia’s national security.
The move follows similar action taken by Italy and Taiwan, with other countries also reviewing potential risks posed by the AI firm. DeepSeek has drawn global attention for its cost-effective AI models, which have disrupted the industry by operating with lower hardware requirements than competitors. The rapid rise of the company has raised concerns over data security, particularly regarding its Chinese origins.
This is not the first time Australia has taken such action against a Chinese technology firm. Two years ago, the government banned TikTok from all government devices for similar security reasons. As scrutiny over AI intensifies, more governments may follow Australia’s lead in limiting DeepSeek’s reach within public sector networks.