FAR.AI is seeking expressions of interest from research engineers looking to work on AI safety projects and red-teaming. We are currently keeping our team small and nimble, so we are only interviewing candidates who are an exceptional fit for this role. We plan to grow more aggressively later in the year, once we have onboarded additional management capacity, so we encourage anyone interested to express interest now for a current or future role.
FAR.AI is a technical AI research non-profit, focused on ensuring the safe development and deployment of frontier AI technologies.
Since starting in July 2022, FAR.AI has grown to 12 FTE, produced 13 academic papers, hosted events for some of the world's leading AI and computer science researchers, and opened our AI-safety-focused co-working space, which is home to around 40 members.
Our research team likes to move fast. We explore promising research directions in AI safety and scale up only those showing high potential for impact. Unlike other AI safety labs that bet on a single research direction, FAR.AI pursues a diverse portfolio of projects. We also put our research into practice through red-teaming engagements with frontier AI developers.
Our current focus areas are building a science of robustness (e.g. finding vulnerabilities in superhuman Go AIs), finding more effective approaches to value alignment (e.g. training from language feedback), and model evaluation (e.g. inverse scaling and codebook features).
To build a flourishing field of AI safety research, we host targeted workshops and events, and operate a co-working space in Berkeley called FAR.Labs. Our previous events include the International Dialogue for AI Safety, which brought together prominent scientists (including two Turing Award winners) from around the globe and culminated in a public statement calling for global action on AI safety research and governance. We recently hosted the Bay Area Alignment Workshop, where over 140 researchers from academia and industry came to learn about AI safety and find collaborators. For more information on FAR.AI’s activities, please see our latest post.
You will collaborate closely with research advisers and research scientists inside and outside of FAR.AI. As a research engineer, you will develop scalable implementations of machine learning algorithms and use them to run scientific experiments. You will be involved in writing up results and credited as an author in submissions to peer-reviewed venues (e.g. NeurIPS, ICLR, JMLR).
While each of our projects is unique, your role will generally involve:
This role would be a good fit for someone looking to gain hands-on machine learning engineering experience while testing their personal fit for AI safety research. Interested applicants might be looking to grow an existing portfolio of machine learning research, or to transition into AI safety research from a software engineering background.
It is essential that you:
It is preferable that you have experience with some of the following:
As a Research Engineer, you would lead collaborations and contribute to many projects, with examples below:
You will be an employee of FAR.AI, a 501(c)(3) research non-profit.
Please express interest in this role! If you have any questions, please get in touch at talent@far.ai.