Preparing for AI’s Dangers
Project Save the World’s “AI Inquiry”
Artificial Intelligence may be the greatest innovation in human history. It may also be the most dangerous. AI developers themselves worry about it more than anyone else. If you aren’t worrying, you haven’t been paying attention.
Climate change and nuclear war would be bad enough, but at least some humans would survive those catastrophes. Now we’re creating a new species of “superintelligent” minds that may protect humanity about as well as we protect ant colonies when we’re building a road. Bye-bye, Homo sapiens!
So let’s form groups of citizens now to deliberate and, if necessary, mobilize political activism. Even if technological solutions can be created (which is uncertain), political activism will be required to make sure they’re adopted. We will need to decide what policies to promote, so let’s start talking! Project Save the World now offers this new program:
For five consecutive weeks, starting on October 29, all registered Inquiry participants will meet by Zoom each Wednesday from noon to 1:00 pm Eastern/Toronto time to discuss a specific AI risk privately. We hope to hear a diversity of views. All participants must be paid subscribers to Project Save the World and fluent in English. Preferably, they will be diverse in their social backgrounds and interests. No expertise is expected.
Members will prepare for each weekly session by spending about 1.5 hours on assigned readings or videos. If many people register, each weekly meeting will split into smaller Zoom breakout groups for discussion, then reassemble as a plenary for the final ten minutes.
Unlike the first four sessions, the fifth (final) meeting of the Inquiry will be a plenary session, recorded and posted publicly to YouTube, on our website (tosavetheworld.ca), and on Substack. That final session will adopt a list of recommendations, to be polished afterward by a committee and released publicly.
A typical registered participant can expect to spend a total of about 11 or 12 hours on the Inquiry, all within a five-week period. Those who participate regularly in the meetings will receive a certificate of completion and an invitation to become a voting member of Project Save the World without charge for one year. These members enjoy monthly Zoom meetings and other opportunities to participate with a global community.
Everyone may freely use the final Recommendation Document for lobbying or other independently organized activities, but the Inquiry itself will conclude when that document is finalized.
Wed. Oct. 29: AI and the Economy
Deliberate together on some of these topics:
A. How should the economic gains from AI (such as productivity increases, lower costs, and new forms of wealth) be distributed fairly among workers, companies, and the public?
B. Some economists expect AI to cause less unemployment than we are usually warned to expect. Is it more prudent to begin planning for massive job losses, or is it better to wait and see what happens? If we choose to begin preparing for widespread unemployment, what policies should governments and industries adopt to manage job displacement and support workers as AI transforms labor markets?
C. How can society prevent the concentration of AI-related wealth and power in a few large corporations, and encourage competition and innovation?
D. To what extent should AI development and deployment be guided or funded by public institutions rather than left to private markets?
E. At present, the US and China are attempting to monopolize access to the most sophisticated GPUs, so that other countries cannot develop AI on any scale. Should access be equally available to all countries?
Prep Readings & Videos (≈1.5 hours total)
- Erik Brynjolfsson – “The Turing Trap” (2022 essay, 10 min)…Read: www.arxiv.org/abs/2201.04200
- Daron Acemoglu – “The Simple Macroeconomics of AI” (2024 summary, 10 min)…
Read: https://economics.mit.edu/sites/default/files/2024-04/The%20Simple%20Macroeconomics%20of%20AI.pdf
- Frey & Osborne – “The Future of Employment” (2013, read only Intro + Conclusion, 10 min)…
Read: www.oxfordmartin.ox.ac.uk/downloads/academic/future-of-employment.pdf
- Rebecca Fannin and Frank Wu, “Episode 718 China and the AI Contest” …
Watch: https://projectsavetheworld.substack.com/p/episode-718-china-and-the-ai-race.
Wed. Nov. 5: Disinformation & Deepfakes
Deliberate together on some of these topics:
A. How can society ensure detection of AI-generated disinformation, and what obligations should tech platforms have to label or restrict synthetic media?
B. Who should be held legally responsible for harms caused by deepfakes: creators, platforms, AI developers, or others? How might liability laws adapt to evolving AI capabilities?
C. What safeguards are needed to protect elections, journalism, and public discourse from AI-driven misinformation without infringing on free expression?
D. Should access to generative AI tools be restricted to prevent misuse, and if so, how? What trade-offs exist between innovation, creative freedom, and societal risk?
Prep Readings & Videos (≈1.5 hours total)
- Siwei Lyu, David Castillo, Cynthia Stewart, and Leon Kosals: “Episode 713 Deceiving us Users” (Project Save the World Forum, Sept. 2025, 61 minutes)…
Watch: https://projectsavetheworld.substack.com/p/episode-713-deceiving-us-users
- Stephan Lewandowsky, “Episode 716 Persistent Fact-checking” (Project Save the World Forum, Sept. 2025, 61 minutes)…
Watch: https://projectsavetheworld.substack.com/p/episode-716-persistent-fact-checking
Wed. Nov. 12: Rogue AI vs. Humanity
Deliberate together on some of these topics:
A. What technical, legal, or ethical “guardrails” might prevent superintelligent AI systems from evolving beyond human control, and how might we enforce them globally?
B. Who should have the authority to monitor and regulate advanced AI development, and how can we ensure transparency without stifling innovation?
C. How can societies encode shared human values (e.g., dignity, autonomy) into AI systems to keep them subordinate to collective human well-being?
D. What international agreements or institutions are needed to prevent unaligned AI development in regions with weak oversight, and how might sanctions/enforcement work?
Prep Readings & Videos (≈1.5 hours total)
- Stuart Russell – TED Talk: “3 principles for creating safer AI” (2017, 17 min)…
Watch: www.ted.com/talks/stuart_russell_3_principles_for_creating_safer_ai
- Geoffrey Hinton short TV interview. “Godfather of AI” (1 minute)…
Watch: www.youtube.com/shorts/m1fYXVF1d7U
- Eliezer Yudkowsky – TIME op-ed: “Pausing AI Isn’t Enough. We Need to Shut It All Down.” (2023, 10 min)…
Read: www.time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough
- Francis Fukuyama and Stuart Russell – video: “Rogue AI vs. Humanity” (Project Save the World forum, Oct. 2025, 60 minutes)…
Watch: www.projectsavetheworld.substack.com/p/episode-721-rogue-ai-vs-humanity
Wed. Nov. 19: Regulating AI
Deliberate together on some of these topics:
A. Should we pursue a federated approach where regional blocs (EU, ASEAN, etc.) first establish strong AI governance frameworks that can later interconnect, rather than attempting to create a single global authority from the outset? Should an existing international body like the UN be given control over AI or should a new institution be created for that purpose? If not the UN, what existing organization is most likely to succeed with the challenge?
B. Can a combination of economic incentives (trade benefits, technology sharing agreements, research funding) and consequences (sanctions, market access restrictions) motivate nations and corporations to comply with international AI standards, even if a central authority lacks traditional enforcement power?
C. Should international cooperation focus primarily on establishing shared technical standards and safety protocols for AI systems, or on broader governance principles about AI’s role in society? Which approach is more likely to gain widespread adoption across different political systems?
D. How can we ensure that global AI governance reflects the interests of all affected populations, not just the preferences of major tech companies and powerful nations? What mechanisms could give meaningful voice to smaller countries, civil society, and future generations in AI governance decisions?
Prep Readings & Videos (≈1.5 hours total)
- Tom Friedman, “A.I. Nightmare and What the U.S. Can Do to Avoid It,” The Opinions Podcast, 3 Sep. 2025 (25 minutes)…
Watch: https://www.youtube.com/watch?v=j0ZK_OeF5Qg
- Matt Sheehan, “China’s Views on AI Safety are Changing – Quickly,” Carnegie Endowment for International Peace, Aug. 2024 (20 minutes)…
Read: www.carnegieendowment.org/research/2024/08/china-artificial-intelligence-ai-safety-regulation?lang=en
- CEDPO AI, “Introduction to the EU AI Pact,” Nov. 2024 (10 minutes)…
Read: www.cedpo.eu/wp-content/uploads/EarlyComplianceProactiveCommitmentsAI-Pact.pdf
- Project Save the World forum, Episode: “How to Regulate AI?” (video, 60 min)… (coming soon)
Wed. Nov. 26: Recommendations (Final Plenary)
Deliberate Together:
A. In this session, the participants will reconsider the recommendations that have been suggested in the previous four sessions; select the ones to include in the final document; and add other advice about future questions that may arise.
B. They will select a committee to polish and publicize this new Statement of Recommendations.
How to Apply to Participate in this Inquiry
You need not be an expert to join these conversations. The more diverse a group is, the more likely it is to reach good decisions. (Yes, there’s research to prove that’s so.)
Registration in the Inquiry is free of charge for paid subscribers to Project Save the World’s Substack.
Participants need to speak and read English passably well. (Sorry, but we can’t provide translations.) We welcome everyone to visit our numerous posts on Substack by clicking here:
https://projectsavetheworld.substack.com
To apply for registration as a participant in the Inquiry, click here.
If you are not already a paying subscriber to Project Save the World’s Substack, you can become one
by clicking the link provided on that application form.
Please submit your application for the Inquiry by October 20. We will reply to your application by
October 25, providing links and further details about the Inquiry. If you don’t receive a reply by
then, feel free to email us at mspencer@web.net.