FAR.AI is seeking applications from both experienced and first-time Research Scientists to develop and execute a safety research agenda and/or accelerate our existing projects.
FAR.AI is a technical AI research non-profit, focused on ensuring the safe development and deployment of frontier AI technologies.
Since starting in July 2022, FAR has grown to 12 FTE, produced 13 academic papers, hosted events for some of the world's leading AI and computer science researchers, and opened our AI-safety-focused co-working space, which is home to around 40 members.
Our research team likes to move fast. We explore promising research directions in AI safety and scale up only those showing high potential for impact. Unlike AI safety labs that bet on a single research direction, FAR pursues a diverse portfolio of projects. We also put our research into practice through red-teaming engagements with frontier AI developers.
Our current focus areas are building a science of robustness (e.g. finding vulnerabilities in superhuman Go AIs), finding more effective approaches to value alignment (e.g. training from language feedback), and model evaluation (e.g. inverse scaling and codebook features).
To build a flourishing field of AI safety research, we host targeted workshops and events and operate a co-working space in Berkeley called FAR.Labs. Our previous events include the International Dialogue for AI Safety, which brought together prominent scientists (including two Turing Award winners) from around the globe, culminating in a public statement calling for global action on AI safety research and governance. We recently hosted the Bay Area Alignment Workshop for over 140 researchers from academia and industry to learn about AI safety and find collaborators. For more information on FAR.AI's activities, please visit our latest post.
We are seeking applications from potential Research Scientists who can:
We are excited by unconventional backgrounds.
You may have the following:
As a Research Scientist, you would lead AI safety research projects or make essential contributions to existing projects. Examples of ongoing projects at FAR include:
You could be an employee or an independent contractor for FAR.AI, a 501(c)(3) research non-profit.
Please apply! If you have any questions about the role, get in touch at talent@far.ai.