Hello!
Some highlights this month:
- You can apply for EAGxVirtual 2023, a free online event for anyone interested in learning more about effective altruism and connecting with the global EA network (17-19 November). Applications close 16 November.
- Kevin Esvelt discusses interventions that can prepare society for biological attacks, how CRISPR-based gene drives could fight disease or improve animal welfare, and more.
- A researcher from the Centre for the Governance of AI asks: “What Do We Mean When We Talk About ‘AI Democratisation’?”
Also: the case for treating malaria vaccination like an emergency, a TED talk on becoming the first sustainable generation, and many new roles.
— Lizka, for the EA Newsletter Team
Biotechnology is making it easier to engineer pandemic-capable viruses; Kevin Esvelt on responsible biotech and improving society's defenses
The day after Kevin Esvelt discovered a way to spread a modified gene through an entire population of plants or animals, he “woke up in a cold sweat.” He’d realized that the technology (CRISPR-based gene drive) was extremely promising — some now believe that it could be used to eradicate malaria by modifying the mosquitoes that transmit the disease, prevent painful screwworm infestations, and much more — but he was worried that someone would misuse it. So Esvelt did something very unusual. He didn’t tell anyone, including his then-advisor, until he was confident that the technology’s risks were manageable.
In a recent appearance on the 80,000 Hours Podcast, Esvelt explains why he now thinks CRISPR-based gene drive technology is relatively safe (it’s slow, clearly detectable, and easily countered). Accidents or carelessness could still lead to catastrophic delays and other issues, so Esvelt argues that we should start small and local (via “daisy drives”) and proceed only with proper regulation and community buy-in.
The podcast episode also covers why and how we should protect society from those who might want to start a civilization-ending pandemic. The number of people who have the ability to identify and release a dangerous virus is growing due to progress in biotechnology, AI, and misguided research. But some things could help. For instance:
- Monitoring wastewater for suspicious patterns or signs of edited DNA to detect pandemics early
- Investing in developing better personal protective equipment (and stockpiling it)
- Installing virus-neutralizing far-UVC lights in workplaces and labs
If you’re interested in getting involved, consider donating to promising biosecurity projects and funds or applying to roles on Esvelt’s SecureBio team and Open Philanthropy’s Global Catastrophic Risks team.
What does democratic oversight of AI look like?
Recent polls suggest that most U.S. voters would approve of regulation that prevents or slows down the development of AI superintelligence, favor restricting the release of AI models we don’t fully understand, and prefer federal regulation of AI development over self-regulation by tech companies. Poll results like these are hard to interpret precisely, but they point to public unease and a disconnect between the general public and some major AI labs in the U.S. So it’s worth exploring how the public should be involved in steering AI development.
People sometimes talk about “AI democratization,” but the phrase is vague. An overview from the Centre for the Governance of AI outlines the different things people mean by “AI democratization” and how those stances diverge. The post (and accompanying paper) distinguishes democratizing AI use (making AI more accessible), AI development (helping a wider range of people contribute to AI design), AI benefits (distributing the benefits of AI more equitably), and AI governance (giving a wider community of stakeholders influence over how AI is used, developed, and shared). Democratizing AI governance doesn’t necessarily mean letting everyone use or build AI models however they want; rather, it means introducing democratic processes, like citizen assemblies, that give people input on key decisions about AI. Open-sourcing powerful AI models (making them accessible to everyone with virtually no restrictions), for instance, is often described as a democratic move, but it’s a choice made by company executives, one that could be extremely risky for society and that, according to the polls linked above, the public might oppose.
Consulting the public on AI development was also discussed in a recent podcast episode with Tantum Collins, where he covered what he’s learned as an AI policy insider at the White House, DeepMind, and elsewhere.
If you’d like to get involved, consider donating or exploring AI safety opportunities.
Animals in data
In other news
For more stories, try these email newsletters and podcasts.
Resources
Links we share every time — they're just that good!
Jobs
Boards and resources:
- The 80,000 Hours Job Board features more than 700 positions. We can’t fit them all in the newsletter, so check out the full list there.
- The EA Opportunity Board collects internships, volunteer opportunities, conferences, and more — including part-time and entry-level job opportunities.
- Probably Good maintains a list of impact-focused job boards.
Featured jobs:
Anthropic
BlueDot Impact
Centre for Effective Altruism
EA Funds (Long-Term Future Fund)
- Fund Chair (Berkeley/remote, $120k - $240k, apply by 23 October)
Giving Green
Non-Trivial
Open Philanthropy
Our World in Data
The Good Food Institute
Tiny Foundation
Announcements
Applications are open for EAGxVirtual
EAGxVirtual 2023 is an online event that will run from 17-19 November 2023. Attendance is free, and anyone interested in learning more about effective altruism and connecting with the global EA network is welcome to apply — attendees are not expected to have prior EA engagement.
Apply by 16 November.
Fellowships, courses, and other events
- The 2024 Tarbell Fellowship provides aspiring journalists with a stipend of up to $50,000, a placement at a major outlet, and training in AI fundamentals. The Tarbell Fellowship launched in 2023, selecting seven fellows from a competitive pool of over 950 applications. Three fellows are currently at TIME, and another is freelancing for The New Yorker. Apply by 5 November.
- EA Virtual Programs are 8-week, part-time, virtual, and free opportunities to engage intensively with ideas related to effective altruism. Apply for the next round by 22 October.
- The AI Safety Fundamentals Alignment Course is a 12-week (2-4 hours per week), free, virtual program that helps participants learn about AI alignment and extreme risks posed by misaligned AI. It is aimed at people interested in exploring AI safety as a career path who have a technical background or who are early in their career. Apply for the early 2024 program.
- The Open Prediction Tournament for Intercollegiate Competition is a 1-day forecasting tournament for undergraduates. Upcoming tournaments will be held in the Bay Area/SF (4 November), DC (18 November), and Boston (2 December).
Organizational Updates
You can see updates from a wide range of organizations on the EA Forum.
Classic: Don’t think, just apply! (usually)
There are hundreds of roles on the 80,000 Hours Job Board, but the prospect of applying can be daunting.
One classic post suggests that you shouldn’t spend too long thinking about the pros and cons of applying to an opportunity; if it’s worth thinking hard about, you should probably just apply instead.
We hope you found this edition useful!
If you’ve taken action because of the Newsletter and haven’t taken our impact survey, please do — it helps us improve future editions.
Finally, if you have feedback for us, positive or negative, let us know!
– The Effective Altruism Newsletter Team