NEXTGENNAV
THE GLOBAL GOALS REPORT: GLOBAL YOUTH AI 2025

Table of Contents
01 Introduction / About Us
02 Our SDGs
03 How We Achieve Each
04 SDG 4
05 SDG 9
06 SDG 10
07 SDG 16
08 Demographics
09 Conclusion / Acknowledgements

Published by NextGenNav.
Credits: All photos and media used in this report (unless stated otherwise) come from United Nations entities or were taken by members of the NextGenNav team.
Designed by the NextGenNav USA Team: Shawn Zhu (Lead), Derin Sezgin, Adhvik Vakulabharanam, Brody Golder-Marinari.
Email: nextgennavai@gmail.com | Website: nextgennav.org

NEXTGENNAV GLOBAL YOUTH AI REPORT 2025

At a glance: 5k+ youth engaged through in-person and virtual programs; 7+ partnering organizations; Nova Workshop engaging 7+ countries.

ETHICAL, our framework for problem solving: Evaluate, Target, Highlight, Innovate, Create, Apply, Lead.

Artificial intelligence and emerging technologies are playing key roles in shaping industries, governments, and everyday life. The UN's Sustainable Development Goals (SDGs), adopted in 2015, identify the key areas of development needed for the sustainability of our world.

Executive Summary
This report captures the youth perspective on a range of AI topics, including education, sustainability, inequality, and ethics. It draws on survey data from 244 youth (people under 30 years of age) across the globe. Respondents are knowledgeable about AI systems. They are only somewhat comfortable with the rapid speed at which AI is developing, and want more regulation and stronger data security. AI systems are generally accessible to respondents, but access gaps exist for AI in the economic sectors of vulnerable countries. Respondents often see negative impacts of AI systems, with the main contributors being misinformation and misuse. Most respondents are at least somewhat hopeful that we as a society can use AI ethically.

01 | Introduction
At NextGenNav, we believe in educating youth so they are better prepared to handle AI and tech systems with integrity and responsibility. We aim to create a future that seamlessly integrates existing AI usage with the Sustainable Development Goals (SDGs).

02 | Our SDGs
An understanding of the youth AI perspective, organized by SDG.
Photo: A NextGenNav workshop engaging elementary school students on how AI systems work.

03 | Our SDGs: How We Achieve Each
The Secretary-General's report on Our Common Agenda highlights the need to "listen to and work with youth." Technology and artificial intelligence are critical areas where youth voices must be heard. Our survey focuses on youth perspectives of AI as they relate to the Sustainable Development Goals (SDGs). We picked the four of the 17 SDGs that we felt related most closely to the survey responses (4, 9, 10, 16). From the survey questions, we derived the overarching questions listed below to better understand each SDG.

PRIMARY SDG
SDG #4: Our core SDG involves affirming quality education. High-quality education on the ethical use of AI and tech literacy helps foster the development of the other SDGs.

SECONDARY SDGS
SDG #9 involves building sustainable communities, which we interpret as the facilitation of tech innovation. We seek to encourage sustainable systems that communities can easily access.
SDG #10 involves reducing inequalities. We seek to reduce the digital divide in developing countries and give equal opportunities to youth.
SDG #16 involves promoting peace, justice, and strong institutions. We seek to equip youth with the knowledge to use and understand responsible technology, fostering transparent and accountable communities.

THE OVERARCHING SURVEY QUESTIONS, GROUPED BY EACH SDG WE ADDRESS
- SDG 4: How knowledgeable are youth on AI systems? (4 questions)
- SDG 9: How can we ensure we develop sustainable AI systems? (3 questions)
- SDG 10: How accessible are AI systems for youth? (3 questions)
- SDG 16: In what instances are youth negatively impacted by AI systems? (3 questions)

04 | SDG 4: Do youth know about AI systems?

Figure 1: Youth familiarity with AI on a scale of 1-5: rating 4 (43%), 3 (32.6%), 5 (15.9%), 2 (6.3%), 1 (2.2%).
Figure 2: Youth familiarity (in %) with specific AI systems: content generation 81, recommender systems 78, autonomous vehicles 68, biometrics 65, medical 54.
Photo: Nova Workshop presenters explaining the basics of how AI works.

Youth surveyed show high levels of AI literacy and familiarity. Most respondents report that they have heard about AI or have learned about it from online resources, giving a familiarity rating of 3 or above on a 1-5 scale. Respondents also report high familiarity with specific AI systems in areas such as content generation and the recommendation algorithms used on platforms like YouTube and TikTok. Others, such as AI in medical technology, are less well known to young people. Additionally, youth perceive that their peers are also familiar with AI, highlighting the exposure of AI systems in their communities. These figures correspond closely to the UN's Annual SDG Report of 2024, which reported a global primary school completion rate of 88% in 2023. The results suggest that respondents are very knowledgeable about most AI systems.

For youth:
- 91% are at least somewhat familiar with AI
- 88% learned about AI from online resources
- 88% believe their peers are at least somewhat familiar with AI

THE 4 QUESTIONS
- How familiar are you with artificial intelligence?
- Are you familiar with / have you heard of these AI tools? (dropdown list)
- How did you learn about these AI tools?
- How knowledgeable do you think your peers are in artificial intelligence?

05 | SDG 9: How can we ensure we develop sustainable AI systems?

Figure 1: Comfort levels with AI development speed on a scale of 1-5: rating 3 (47.8%), 4 (23.5%), 2 (12.3%), 1 (9.7%), 5 (6.7%).

When asked who should be responsible for managing AI systems, about half of the respondents (135) believe the researchers who create the AI system itself should be responsible; a further 55 respondents believe international bodies such as the UN should be responsible. Respondents also report a largely neutral level of comfort with the pace of AI development, with most giving a comfort rating of 3 or above on a 1-5 scale. Their most common suggestions were more rules and regulations and keeping private data secure. Other responses called for less discrimination and bias within AI systems and for more human involvement and control (explainable AI). Per the UN's Annual SDG Report of 2024, there is growing recognition of the need for global governance of artificial intelligence to ensure its alignment with human rights. The results make clear that respondents want sustainable AI systems to be a balanced effort between researchers and international bodies, supported by rules, regulations, and data security.
"Public trust in AI is heavily dependent on a balance of efforts between both researchers and international bodies." ~ Anonymous respondent

For youth:
- 48% have a neutral level of comfort with AI
- 55% believe creators should be responsible for AI
- 60% believe private data should be secured
- 68% believe there should be more rules and regulations

THE 3 QUESTIONS
- Who should be held responsible for managing AI systems? (select all that apply)
- Are you comfortable with how fast AI is developing?
- What would make you more comfortable with AI development?

06 | SDG 10: How accessible are AI systems for youth?

Figure 1: Youth usage (in %) of specific AI systems: chatbots 81, search engines 76, writing assistants 59, AI assistants 57, image generation 20, music generation 5.

Youth surveyed cite chatbots (ChatGPT, Claude, DeepSeek) and search engines (Google, Bing) as the most commonly used AI; these two options were selected together in approximately 130 responses to the multiple-choice question. By contrast, few respondents use AI for music generation or standalone image generation. Youth around the world use AI for similar tasks, citing education, entertainment, and social connection as key use cases; in each of these categories, the difference in percentages between the United States and Burkina Faso was less than 3%. Respondents who report using AI for jobs or volunteer work, however, mainly come from the United States: 40% of respondents in the United States use AI for such tasks, compared with 23% in Burkina Faso. Per the UN's Annual SDG Report of 2024, there are increasing concerns about a widening income gap in wealthy countries. The results show that AI systems are generally accessible to respondents, but access gaps exist for AI in the economic sectors of vulnerable countries.

For youth:
- 91% of respondents use AI for educational purposes
- 81% use chatbots as their go-to AI
- 40% of respondents from the United States use AI for jobs / volunteering
- 23% of respondents from Burkina Faso use AI for jobs / volunteering

THE 3 QUESTIONS
- What types of AI have you used / are using in your daily life? (Click all that apply)
- If you responded to the above question, why do you use AI? (Click all that apply)
- Where do you use AI for educational purposes?

NextGenNav highlights youth-led organizations that are expanding access to AI and technology within their local communities.

07 | SDG 16: In what instances are youth negatively impacted by AI systems?

Figure 2: Negative impacts of AI seen by youth, by type (% of those who have seen negative impacts): misinformation 64, misuse 63, bias 25, security 8, other 8.

Many youth have noticed negative effects from AI usage, with 60% reporting that they have seen them. Burkina Faso had more respondents cite multiple types of negative concerns than any other country. Overall, respondents cite misinformation and misuse as the most common negative effects of AI systems, reported by 64% and 63% respectively of those who have seen negative impacts. Most respondents believe that society will be able to use AI ethically, with 78% giving a belief rating of 3 or above on a 1-5 scale. Additionally, when asked to rank six AI risk factors from highest risk (1) to lowest risk (6), participants rated misinformation as the highest risk, with an average rank of 2.66. The next two highest were security risk at 2.72 and malicious misuse at 2.76. Interestingly, AI contributing to inequality was deemed the lowest risk factor of all the options, with an average rank of 4.98.
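To clarify how a ranked-choice question yields fractional figures like the 2.66 reported above, the short sketch below shows one common way such scores are obtained: averaging the rank each respondent assigns to a factor, so that a lower mean rank indicates a higher perceived risk. The data and names here are hypothetical illustrations, not the survey's actual records or analysis code.

    from collections import defaultdict

    def mean_ranks(responses):
        """Average the rank each respondent assigned to each risk factor.

        `responses` is a list of dicts mapping a risk factor to the rank a
        respondent gave it (1 = highest risk, 6 = lowest risk), so a lower
        mean rank means a higher perceived risk.
        """
        totals = defaultdict(float)
        counts = defaultdict(int)
        for response in responses:
            for factor, rank in response.items():
                totals[factor] += rank
                counts[factor] += 1
        return {factor: totals[factor] / counts[factor] for factor in totals}

    # Hypothetical rankings from three respondents over three of the six factors.
    sample = [
        {"misinformation": 1, "security": 2, "inequality": 6},
        {"misinformation": 3, "security": 2, "inequality": 5},
        {"misinformation": 2, "security": 4, "inequality": 4},
    ]
    # Misinformation ends up with the lowest mean rank (2.0), i.e. the highest
    # perceived risk, mirroring how the 2.66 figure above is read.
    print(mean_ranks(sample))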
Overall, respondents often see negative impacts of AI systems, with the main contributors being misinformation and misuse. Most respondents are at least somewhat hopeful that we as a society can use AI ethically.

Figure 1: Belief that society can use AI ethically, on a scale of 1-5: rating 3 (38.3%), 4 (26.3%), 2 (19.1%), 5 (12.9%), 1 (3.3%).

For youth:
- 60% have seen negative impacts of AI systems
- 78% are at least somewhat hopeful we can use AI ethically
- 53% ranked misinformation as a top-three risk for AI
- 18% ranked inequality as a top-three risk for AI

THE 3 QUESTIONS
- Have you experienced / seen a situation where an AI negatively impacted someone?
- Do you believe we as a society can use artificial intelligence ethically?
- Why did you respond the way you did for the question above? (optional)
- Rank these AI risks from highest risk (1) to lowest risk (6)

Photo: A Nova Workshop student's design of a dog communication device with user security in mind.

08 | Demographics
Figure 1: Participants' self-reported gender: male 65.1%, female 34.9%.
Figure 2: Participants' country of residence: United States of America 68.4%, Burkina Faso 20.9%, Australia 2.3%, Brazil 0.9%, Yemen 0.5%.
Other participating countries (not visualized on the graph): India 1.9%, United Kingdom 1.9%, Bahamas 0.5%, Zimbabwe 0.5%, Pakistan 0.5%, Benin 0.5%, Malaysia 0.5%, Sri Lanka 0.5%, Canada 0.5%.

09 | Conclusion / Acknowledgements
The survey results support the following conclusions:
- SDG 4: Respondents are knowledgeable about AI systems.
- SDG 9: Respondents are only somewhat comfortable with the rapid speed at which AI is developing, and want more regulation and stronger data security.
- SDG 10: AI systems are generally accessible to respondents, but access gaps exist for AI in the economic sectors of vulnerable countries.
- SDG 16: Respondents often see negative impacts of AI systems, with the main contributors being misinformation and misuse. Most respondents are at least somewhat hopeful that we as a society can use AI ethically.

With thanks for their support: Data Science Student Society and SecuraAI.