Product Policy Lead


Building Reliable, Interpretable, and Steerable AI Systems

As the Product Policy Lead, you will set the foundation for Anthropic’s approach to safe deployments. You will develop the policies that govern the use of our systems, oversee the technical approaches to identifying current and future risks, and build the organizational capacity to mitigate product safety risks at scale. You will work collaboratively with our Product, Societal Impacts, Policy, Legal, and leadership teams to develop policies and processes that protect Anthropic and our partners.

You’re a great fit for the role if you’ve served in leadership positions in Trust & Safety, product policy, or risk management at fast-growing technology companies, and you recognize that emerging technologies such as generative AI will require creative approaches to mitigating complex threats.

Please note that in this role you may encounter sensitive material and subject matter, including policy issues that may be offensive or upsetting.

Representative projects

  • Set the strategy and define the build-out of Anthropic’s approach to product policy. You will determine the policies for how our systems can be used, oversee the development of risk identification and monitoring functionality, and build out our Product Policy function, including policy analysts, engineers, data scientists, and operations analysts.
  • Lead the development of Anthropic’s policies on how our systems can be used, from the identification and prioritization of needed policies, to research efforts ensuring those policies are informed by subject matter experts, to the testing and iteration of draft policies, to implementation.
  • Build out and oversee the technical components of our product policy organization, including the engineers and data scientists who develop innovative methods for identifying and mitigating system abuse.
  • Work collaboratively with the Product and Societal Impacts teams, as well as external partners, to deeply understand potential use cases for Anthropic systems and the requisite policies to govern them effectively.
  • Own the end-to-end execution of product policy enforcement, including investigations of novel use cases and edge-case policy decisions, as well as the eventual buildout and scaling of a policy operations function.
  • Communicate Anthropic’s policies externally and work collaboratively with other organizations to build strong community norms amongst AI developers.

You might be a good fit if you:

  • Enjoy building programs from the ground up. You think holistically and can proactively identify the needs of an organization, making key hires or developing new programs as needed. You have demonstrated experience growing a dedicated function and scaling its impact.
  • Are an excellent communicator. You make ambiguous problems clear and identify core principles that can translate across scenarios. You advise leadership, internal teams, and customers on specific policy decisions, as well as industry trends more broadly.
  • Have strong people management skills. You’re an experienced manager with a track record for building high-functioning, cohesive teams. You recruit and mentor individual contributors and other managers across policy, technical, and operations teams.
  • Have a passion for making powerful technology safe and societally beneficial. You anticipate unforeseen risks, model out scenarios, and provide actionable guidance to internal stakeholders.
  • Thrive on collaboration and build trust with teams across the organization. You handle sensitive and high-stakes policy decisions with professionalism and diplomacy. You respectfully influence stakeholders to act on data and insights from your team.
  • Think creatively about the risks and benefits of new technologies, and think beyond past checklists and playbooks. You stay up-to-date and informed by taking an active interest in emerging research and industry trends.


For this role, we prefer candidates who are able to be in our office more than 25% of the time, though we encourage you to apply even if you don’t think you will be able to do that.

Applications will be reviewed on a rolling basis.

If interested, you can submit an application here.

How we’re different

We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts, and we value impact that advances our long-term goals of steerable, trustworthy AI over work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We’re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.

Come work with us! Anthropic is a public benefit corporation based in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.


To apply for this job, please visit jobs.lever.co.