Research Engineer
FAR.AI is seeking applications from research engineers looking to work on AI safety projects and red-teaming.
About Us
FAR.AI is a non-profit AI research institute focused on ensuring the safe development and deployment of frontier AI technologies.
Since its founding in July 2022, FAR.AI has grown to 19 FTE, produced 28 academic papers, and established leading AI safety events for research and international cooperation. Our work is recognized globally, with publications at top venues such as NeurIPS, ICML, and ICLR that have been featured in the Financial Times, Nature News, and MIT Tech Review. We leverage our research insights to drive practical change through red-teaming engagements with frontier model developers. We also help steer and grow the AI safety field by developing research roadmaps with renowned researchers such as Yoshua Bengio; running FAR.Labs, an AI-safety-focused co-working space with 40 members; and making targeted grants to technical researchers.
About FAR.Research
Our research team likes to move fast. We explore promising research directions in AI safety and scale up only those showing a high potential for impact. Unlike other AI safety labs that take a bet on a single research direction, FAR.AI aims to pursue a diverse portfolio of projects.
Our current focus areas include:
- building a science of robustness (e.g. finding vulnerabilities in superhuman Go AIs)
- finding more effective approaches to value alignment (e.g. training from language feedback)
- advancing model evaluation techniques (e.g. inverse scaling, codebook features, and learned planning)
We also put our research into practice through red-teaming engagements with frontier AI developers, and collaborations with government institutes.
Other FAR Projects
To build a flourishing field of AI safety research, we host targeted workshops and events, and operate a co-working space in Berkeley called FAR.Labs. Our previous events include the International Dialogue for AI Safety, which brought together prominent scientists (including two Turing Award winners) from around the globe, culminating in a public statement calling for global action on AI safety research and governance. We also host the semiannual Alignment Workshop, where 150 researchers from academia, industry, and government learn about the latest developments in AI safety and find collaborators. For more information on FAR.AI’s activities, please visit our recent post.
About the Role
You will collaborate closely with research advisers and research scientists inside and outside of FAR. As a research engineer, you will develop scalable implementations of machine learning algorithms and use them to run scientific experiments. You will be involved in the write-up of results and credited as an author in submissions to peer-reviewed venues (e.g. NeurIPS, ICLR, JMLR).
While each of our projects is unique, your role will generally have:
- Flexibility. You will focus on research engineering but contribute to all aspects of the research project. We expect everyone on the project to help shape the research direction, analyze experimental results, and participate in the write-up of results.
- Variety. You will work on a project that uses a range of technical approaches to solve a problem. You will also have the opportunity to contribute to different research agendas and projects over time.
- Collaboration. You will work regularly with collaborators from a range of academic labs and research institutions.
- Mentorship. You will develop your research taste through regular project meetings and develop your programming style through code reviews.
- Autonomy. You will be highly self-directed. To succeed in the role, you will likely need to spend part of your time studying machine learning and developing your high-level views on AI safety research.
About You
This role would be a good fit for someone looking to gain hands-on experience with machine learning engineering while testing their personal fit for AI safety research. We imagine interested applicants might be looking to grow an existing portfolio of machine learning research or looking to transition to AI safety research from a software engineering background.
It is essential that you:
- Have significant software engineering experience or experience applying machine learning methods. Evidence of this may include prior work experience, open-source contributions, or academic publications.
- Have experience with at least one object-oriented programming language (preferably Python).
- Are results-oriented and motivated by impactful research.
It is preferable that you have experience with some of the following:
- Common ML frameworks like PyTorch or TensorFlow.
- Natural language processing or reinforcement learning.
- Operating system internals and distributed systems.
- Publications or open-source software contributions.
- Basic linear algebra, vector calculus, probability, and statistics.
About the Projects
As a research engineer, you would lead collaborations and contribute to a range of projects. Examples include:
- Scaling laws for prompt injections. Will advances in capabilities from increasing model and data scale help resolve prompt injections or “jailbreaks” in language models, or is progress in average-case performance orthogonal to worst-case robustness?
- Robustness of advanced AI systems. Explore adversarial training, architectural improvements and other changes to deep learning systems to improve their robustness. We are exploring this both in zero-sum board games and language models.
- Mechanistic interpretability for mesa-optimization. Develop techniques to identify internal planning in models to effectively audit the “goals” of models in addition to their external behavior.
- Red-teaming of frontier models. Apply our research insights to test for vulnerabilities and limitations in frontier AI models prior to deployment.
Logistics
You will be an employee of FAR.AI, a 501(c)(3) research non-profit.
- Location: Both remote and in-person (Berkeley, CA) are possible. We sponsor visas for in-person employees, and can also hire remotely in most countries.
- Hours: Full-time (40 hours/week).
- Compensation: $80,000-$175,000/year depending on experience and location. We will also pay for work-related travel and equipment expenses. We offer catered lunch and dinner at our offices in Berkeley.
- Application process: A 72-minute programming assessment, a short screening call, two 1-hour interviews, and a 1-2 week paid work trial. If you are not available for a work trial we may be able to find alternative ways of testing your fit.
We encourage you to express interest in this role! If you have any questions, please do get in touch at talent@far.ai.