The surge in AI adoption, coupled with low AI literacy and weak governance, is creating a complex risk environment. Many organisations are deploying AI without proper consideration of what is needed to ensure transparency, accountability and ethical oversight.

KPMG has partnered with the University of Melbourne to produce the most comprehensive global study into the public's trust, use of and attitudes towards AI.

This year, the survey included more than 48,000 people and was expanded to 47 countries, providing a deeper level of insight into perceptions of AI across the globe.

What is New Zealand's attitude towards AI?


Calls for greater governance

The research found strong public support for AI regulation, with 81% of New Zealanders believing regulation is required. Specifically, 89% want laws and action to combat AI-generated misinformation. Only 23% believe current safeguards are sufficient to make AI use safe, and New Zealanders expect a comprehensive regulatory approach to AI.

New Zealanders are less trusting and positive about AI than most countries

As well as being wary of AI, New Zealand ranks among the lowest globally on acceptance, excitement and optimism about it, alongside Australia and the Netherlands. Only 44% of New Zealanders believe the benefits of AI outweigh the risks, the lowest ranking of any country. New Zealand trails behind other countries in realising the benefits of AI (54% vs 73% globally report experiencing benefits).

New Zealand is lagging in AI literacy

New Zealanders have amongst the lowest levels of AI training and education, with just 24% having undertaken AI-related training or education compared to 39% globally. Only 36% believe they have the skills to use AI tools appropriately (60% globally).

Understanding AI: The big picture

The KPMG and University of Melbourne Trust in AI Survey Report 2025 paints a complex picture of how people feel about AI.

While AI use is high, trust and literacy levels vary considerably across countries, with emerging economies leading in both areas. Concerns about AI risks are prevalent, highlighting the need for effective regulation and governance.

Overall, there is a clear ambivalence towards AI. People appreciate its technical capabilities yet remain cautious about its safety and societal impact. This complex mix of feelings has led to moderate to low acceptance of AI and an increase in worry over time. 

Given the huge potential of AI technologies, careful management and an understanding of public expectations around regulation and governance will be pivotal in guiding their responsible development and use.

FAQs

What is trust in AI, and why does it matter?

Trust in artificial intelligence (AI) refers to the willingness to rely on an AI system based on positive expectations of its performance and ethical behaviour. This includes the system's technical ability to provide accurate and reliable outputs, as well as its safety, security, and ethical use. Trust is crucial because it underpins the acceptance and sustained adoption of AI systems.

How do people feel about AI and the workforce?

People have mixed perceptions about AI taking over the workforce. While many recognise the efficiency and innovation AI can bring, there are significant concerns about job loss, deskilling, and dependency on AI.

Almost half of the respondents believe AI will eliminate more jobs than it will create, and many are worried about being left behind if they don't use AI at work. However, there is also a recognition of the potential benefits, such as improved efficiency and decision-making.

Have attitudes towards AI changed over time?

Yes, people's attitudes towards AI have changed over time. Trust in AI has declined since the release of generative AI, with increased concerns about its risks and impacts. However, many still recognise its benefits, such as increased efficiency and innovation.

How was the research conducted?

The research was conducted using an online survey completed by representative research panels in each country between November 2024 and mid-January 2025. The study surveyed 48,340 people across 47 countries, covering all global geographical regions.

About the research

The team behind the research

Professor Nicole Gillespie, Dr Steve Lockey, Alexandria Macdade, Tabi Ward, and Gerard Hassed.

The University of Melbourne research team led the design, conduct, data collection, analysis, and reporting of this research.

Acknowledgments

Advisory group: James Mabbott, Jessica Wyndham, Nicola Stone, Sam Gloede, Dan Konigsburg, Sam Burns, Kathryn Wright, Melany Eli, Rita Fentener van Vlissingen, David Rowlands, Laurent Gobbi, Rene Vader, Adrian Clamp, Jane Lawrie, Jessica Seddon, Ed O'Brien, Kristin Silva, and Richard Boele.

We are grateful for the insightful expert input and feedback provided at various stages of the research by Ali Akbari, Nick Davis, Shazia Sadiq, Ed Santow, Tapani Rinta-Kahila, Alice Rickert, Lucy Kenyon-Jones, Morteza Namvar, Olya Ohrimenko, Saeed Akhlaghpour, Chris Ziguras, Sam Forsyth, Geoff Dober, Giles Hirst, and Madhava Jay.

We appreciate the data analysis support provided by Jake Morrill. 

Report production: Kathryn Wright, Melany Eli, Bethany Fracassi, Nancy Stewart, Yong Dithavong, Marty Scerri and Lachlan Hardisty.

Citation

Gillespie, N., Lockey, S., Ward, T., Macdade, A., & Hassed, G. (2025). Trust, Attitudes and Use of Artificial Intelligence: A Global Study 2025. The University of Melbourne and KPMG.

Funding

This research was supported by the Chair in Trust research partnership between the University of Melbourne and KPMG Australia, and funding from KPMG International, KPMG Australia, and the University of Melbourne.

The research was conducted independently by the university research team. 

Lead researcher

Nicole Gillespie, Chair of Trust, Melbourne Business School
The University of Melbourne