Casey Newton:

At the end of last month, I attended an inaugural conference in Berkeley named the Curve. The idea was to bring together engineers at big tech companies, independent safety researchers, academics, nonprofit leaders, and people who have worked in government to discuss the biggest questions of the day in artificial intelligence:

Does AI pose an existential threat? How should we weigh the risks and benefits of open weights? When, if ever, should AI be regulated? How? Should AI development be slowed down or accelerated? Should AI be handled as an issue of national security? When should we expect AGI?

If the idea was to produce thoughtful collisions between e/accs and decels, the Curve came up a bit short: the conference was long on existential dread, and I don’t think I heard anyone say that AI development should speed up. 

Even if it felt a bit one-sided, I still found the conference highly useful. Beyond everything I learned about the state of AI development and the various efforts to align it with human interests, my biggest takeaway is that there is an enormous disconnect between external critics of AI, who post about it on social networks and in their newsletters, and internal critics of AI: the people who work on it directly, whether at companies like OpenAI and Anthropic or as researchers who study it.