Why DC says no to AI-made comics

Jim Lee rejects generative AI for DC storytelling, pledging no AI writing, art, or audio under his leadership. He framed AI alongside other overhyped threats, arguing that predictions falter while human craft endures. DC, he said, will keep its focus on creator-led work.

Lee rooted the stance in the value of imperfection and intent. Smudges, rough lines, and hesitation signal authorship, not flaws. Fans, he argued, sense authenticity and recoil from outputs that feel synthetic or aggregated.

Concerns ranged from shrinking attention spans to characters nearing the public domain. The response, Lee said, is better storytelling and world-building. Owning a character differs from understanding one, and DC’s universe supplies the meaning that endures.

Policy meets practice in DC's recent moves against suspected AI art. In 2024, variant covers were pulled after high-profile allegations of AI-generated content. The episode illustrated a willingness to enforce standards rather than just announce them.

Lee positioned 2035 and DC’s centenary as a waypoint, not a finish line. Creative evolution remains essential, but without yielding authorship to algorithms. The pledge: human-made stories, guided by editors and artists, for the next century of DC.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI remakes the future of music

Asia’s creative future takes centre stage at Singapore’s All That Matters, a September forum for sports, tech, marketing, gaming, and music. AI dominated the music track, spanning creation, distribution, and copyright. Session notes signal rapid structural change across the industry.

The web is shifting again as AI reshapes search and discovery. AI-first browsers and assistants challenge incumbents, while Google’s Gemini and Microsoft’s Copilot race on integration. Early builds feel rough, yet momentum points to a new media discovery order.

Consumption defined the last 25 years, moving from CDs to MP3s, piracy, streaming, and even vinyl’s comeback. Creation looks set to define the next decade as generative tools become ubiquitous. Betting against that shift may be comfortable, yet market forces indicate it is inevitable.

Music generators like Suno are advancing fast amid lawsuits and talks with rights holders. Expected label licensing will widen training data and scale models. Outputs should grow more realistic and, crucially, more emotionally engaging.

Simpler interfaces will accelerate adoption. The prevailing design thesis is ‘less UI’: creators state intent and the system orchestrates cloud tools. Some services already turn a hummed idea into an arranged track, foreshadowing release-ready music from plain descriptions.

‘AI City Vizag’ moves ahead with ₹80,000-crore Google hyperscale campus in India

Andhra Pradesh will sign an agreement with Google on Tuesday for a 1-gigawatt hyperscale data centre in Visakhapatnam. Officials describe the ₹80,000-crore investment as a centrepiece of ‘AI City Vizag’. Plans include clean-energy integration and resilient subsea and terrestrial connectivity.

The campus will deploy Google’s full AI stack to accelerate AI-driven transformation across India. Infrastructure, data-centre capacity, large-scale energy, and expanded fibre converge in one hub. Design targets reliability, scalability, and seamless links into Google’s global network.

State approval came via the State Investment Promotion Board led by Chief Minister N. Chandrababu Naidu. Government estimates forecast average annual GSDP gains of ₹10,518 crore in 2028–2032. About 188,220 jobs a year, plus ₹9,553 crore in Google Cloud-enabled productivity spillovers, are expected.

The agreement will be signed at Hotel Taj Mansingh in New Delhi. Union ministers Nirmala Sitharaman and Ashwini Vaishnaw will attend with Chief Minister Naidu. Google executives Thomas Kurian, Bikash Koley, and Karan Bajwa will represent the company.

Delivery will rely on single-window clearances, reliable utilities, and plug-and-play, renewable-ready infrastructure, led by the Economic Development Board and ITE&C. Naidu will invite the Prime Minister to ‘Super GST – Super Savings’ in Kurnool and the CII Partnership Summit in Vizag on 14–15 November.

Japan pushes domestic AI to boost national security

Japan will prioritise home-grown AI technology in its new national strategy, aiming to strengthen national security and reduce dependence on foreign systems. The government says developing domestic expertise is essential to prevent overreliance on US and Chinese AI models.

Officials revealed that the plan will include better pay and conditions to attract AI professionals and foster collaboration among universities, research institutes and businesses. Japan will also accelerate work on a next-generation supercomputer to succeed the current Fugaku model.

Prime Minister Shigeru Ishiba has said Japan must catch up with global leaders such as the US and reverse its slow progress in AI development. Only a small share of people in Japan reported using generative AI last year, compared with nearly 70 percent in the United States and over 80 percent in China.

The government’s strategy will also address the risks linked to AI, including misinformation, disinformation and cyberattacks. Officials say the goal is to make Japan the world’s most supportive environment for AI innovation while safeguarding security and privacy.

AI chatbots linked to US teen suicides spark legal action

Families in the US are suing AI developers after tragic cases in which teenagers allegedly took their own lives following exchanges with chatbots. The lawsuits accuse platforms such as Character.AI and OpenAI’s ChatGPT of fostering dangerous emotional dependencies with young users.

One case involves 14-year-old Sewell Setzer, whose mother says he fell in love with a chatbot modelled on a Game of Thrones character. Their conversations reportedly turned manipulative before his death, prompting legal action against Character.AI.

Another family claims ChatGPT gave their son advice on suicide methods, leading to a similar tragedy. The companies have expressed sympathy and strengthened safety measures, introducing age-based restrictions, parental controls, and clearer disclaimers stating that chatbots are not real people.

Experts warn that chatbots are repeating social media’s early mistakes, exploiting emotional vulnerability to maximise engagement. Lawmakers in California are preparing new rules to restrict AI tools that simulate human relationships with minors, aiming to prevent manipulation and psychological harm.

Apple sued for allegedly using pirated books to train its AI model

Apple is facing a lawsuit from neuroscientists Susana Martinez-Conde and Stephen Macknik, who allege that Apple used pirated books from ‘shadow libraries’ to train its new AI system, Apple Intelligence.

Filed on 9 October in the US District Court for the Northern District of California, the suit claims Apple accessed thousands of copyrighted works without permission, including the plaintiffs’ own books.

The researchers argue Apple’s market value surged by over $200 billion following the AI’s launch, benefiting from the alleged copyright violations.

This case adds to a growing list of legal actions targeting tech firms accused of using unlicensed content to train AI. Apple previously faced similar lawsuits from authors in September.

While Meta and Anthropic have also faced scrutiny, courts have so far ruled in their favour under the ‘fair use’ doctrine. The case highlights ongoing tensions between copyright law and the data demands of AI development.

Imperial College unveils plans for new AI campus in west London

Imperial College London has launched a public consultation on plans for a new twelve-storey academic building in White City dedicated to AI and data science.

The proposed development will bring together computer scientists, mathematicians, and business specialists to advance AI research and innovation.

The building will include laboratories, research facilities, and public areas such as cafés and exhibition spaces. It forms part of Imperial's wider White City masterplan, which also includes housing, a hotel, and additional research infrastructure.

The university aims to create what it describes as a hub for collaboration between academia and industry.

Outline planning permission for the site was granted by Hammersmith and Fulham Council in 2019. The consultation is open until 26 October, after which a formal planning application is expected later this year. If approved, construction could begin in mid-2026, with completion scheduled for 2029.

Imperial College, established in 1907 and known for its focus on science, engineering, medicine, and business, sees the new campus as a step towards strengthening the position of the UK in AI research and technology development.

Italy bans deepfake app that undresses people

Italy’s data protection authority has ordered an immediate suspension of the app Clothoff, which uses AI to generate fake nude images of real people. The company behind it, based in the British Virgin Islands, is now barred from processing personal data of Italian users.

The watchdog found that Clothoff enables anyone, including minors, to upload photos and create sexually explicit or pornographic deepfakes. The app fails to verify consent from those depicted and offers no warning that the images are artificially generated.

The regulator described the measure as urgent, citing serious risks to human dignity, privacy, and data protection, particularly for children and teenagers. It has also launched a wider investigation into similar so-called ‘nudifying’ apps that exploit AI technology.

Italian media have reported a surge in cases where manipulated images are used for harassment and online abuse, prompting growing social alarm. Authorities say they intend to take further steps to protect individuals from deepfake exploitation and strengthen safeguards around AI image tools.

Tech giants race to remake social media with AI

Tech firms are racing to integrate AI into social media, reshaping online interaction while raising fresh concerns over privacy, misinformation, and copyright. Platforms like OpenAI’s Sora and Meta’s Vibes are at the centre of the push, blending generative AI tools with short-form video features similar to TikTok.

OpenAI’s Sora allows users to create lifelike videos from text prompts, but film studios say copyrighted material is appearing without permission. OpenAI has promised tighter controls and a revenue-sharing model for rights holders, while Meta has introduced invisible watermarks to identify AI content.

Safety concerns are mounting as well. Lawsuits allege that AI chatbots such as Character.AI have contributed to mental health issues among teenagers. OpenAI and Meta have added stronger restrictions for young users, including limits on mature content and tighter communication controls for minors.

Critics question whether users truly want AI-generated content dominating their feeds, describing the influx as overwhelming and confusing. Yet industry analysts say the shift could define the next era of social media, as companies compete to turn AI creativity into engagement and profit.

AI uncovers Lyme disease overlooked by doctors

Oliver Moazzezi endured years of debilitating symptoms, including severe tinnitus, high blood pressure, fatigue, and muscle spasms, following a tick bite three years ago. Doctors initially attributed his issues to anxiety or hearing loss, leaving him feeling dismissed and treated like a hypochondriac.

Frustrated, the IT consultant turned to AI, inputting all his symptoms into a tool prompted to draw from verified medical sources. Without mentioning Lyme disease, the AI suggested it as a possibility, prompting Oliver to seek a private antibody test that confirmed the diagnosis.

Lyme disease, a bacterial infection spread by infected ticks, often mimics other conditions, making early detection challenging. For Oliver, symptoms such as a rash, fatigue, and tinnitus disrupted his gym visits, swimming, and ability to hear the sounds of nature.

Specialists echo Oliver's frustrations with under-diagnosis in the NHS and private care. Tick-borne disease expert Georgia Tuckey says NHS tests miss Lyme symptom patterns: around 1,500 cases are confirmed yearly in England and Wales, but an estimated 3,000–4,000 more likely go untreated.

The UK Health Security Agency acknowledges higher unconfirmed instances and ongoing data efforts to better track incidence.

AI shows promise in aiding disease diagnosis, as seen in Oliver Moazzezi’s discovery, empowering patients with insights from verified medical sources. However, experts stress that AI cannot replace doctors, urging professional consultation to ensure accurate, safe treatment.
