Survey finds developers value AI for ideas, not final answers
Developers embrace AI cautiously, using it more for learning than production code.

As AI becomes more integrated into developer workflows, a new report shows that trust in AI-generated results is eroding. According to Stack Overflow’s 2025 Developer Survey, AI usage has risen to 84%, up from 76% last year. However, trust in its output has dropped, especially among experienced professionals.
The survey found that 46% of developers now lack trust in AI-generated answers.
That figure marks a sharp increase from 31% in 2024, suggesting growing scepticism despite higher adoption. By contrast, only 3.1% of developers report high trust in AI responses.
Interestingly, trust varies with experience. Beginners were more than twice as likely to express high confidence in AI, with 6.1% reporting strong trust compared with just 2.5% among seasoned developers. The results point to a divide in how AI is perceived across the developer landscape.
Despite those doubts, developers continue to use AI tools across a variety of tasks. The vast majority – 78.5% – use AI infrequently, for instance about once a month. The pattern holds across experience levels, suggesting cautious and situational usage.
While trust is lacking, developers still see AI as a helpful starting point. Three in five respondents reported favourable views of AI tools overall, one in five viewed them negatively, and the remaining fifth were neutral.
However, that usefulness has limits. Developers were quick to seek human input when unsure about AI responses: 75% said they would ask someone when they didn’t trust an AI-generated answer, and 58% preferred human advice when they didn’t fully understand a solution.
Ethics and security were also areas where developers preferred human judgement. Again, 58% reported turning to colleagues or mentors to evaluate such risks. Such cases show a continued reliance on human expertise in high-stakes decisions.
Stack Overflow CEO Prashanth Chandrasekar acknowledged the limitations of AI in the development process. ‘AI is a powerful tool, but it has significant risks of misinformation or can lack complexity or relevance,’ he said. He added that AI works best with a ‘trusted human intelligence layer’.
The data also showed that while developers may not trust AI entirely, they do use it to support learning.
Forty-four percent of respondents said they use AI tools to learn how to code, up from 37% last year.
A further 36% use them for work-related growth or career advancement.
The results highlight the role of AI as an educational companion rather than a coding authority.
It can help users understand concepts or generate basic examples, but most still want human review.
That distinction matters as teams consider how to integrate AI into production workflows.
Some developers are concerned that overreliance on AI could reduce the depth of their problem-solving skills. Others worry about hallucinations: AI-generated content that appears accurate but is misleading or incorrect. Such risks have led to a cautious, layered approach to using AI tools in real-world projects.
Stack Overflow’s findings align with broader industry trends in AI adoption and trust. Tech firms are exploring ways to integrate AI safely, with many prioritising transparency and human oversight. Chandrasekar believes developers are uniquely positioned to help shape the future of AI.
‘By providing a trusted human intelligence layer in the age of AI, we believe the tech enthusiasts of today can play a larger role in adding value,’ he said. ‘They’ll help build the AI technologies and products of tomorrow.’
As AI continues to expand into software development, one thing is clear: trust matters. Developers are open to using AI – but only when it supports, rather than replaces, human judgement. The challenge now is building systems that earn and maintain that trust.