Research Engineer
FAR AI is seeking expressions of interest from research engineers looking to contribute to AI safety projects in reinforcement learning, natural language processing, and adversarial robustness.
About Us
FAR AI is a research organization that incubates and accelerates neglected but high-potential AI safety research agendas.
FAR AI’s mission is to ensure AI systems are trustworthy and beneficial to society. Broadly speaking, our research focuses on how AI can learn about humans’ goals and then robustly help humans achieve them. Our expertise is in reinforcement learning (RL), one of the most general frameworks for building transformative AI. In prior work, we have developed ways to measure the goals of RL agents and to teach a single agent to achieve many different goals. We have also identified a new threat model for attacking RL systems, discovered vulnerabilities in narrowly superhuman systems, and developed a method to make RL systems more robust.
About the Role
You will collaborate closely with research advisers and research scientists both inside and outside of FAR AI. As a research engineer, you will develop scalable implementations of machine learning algorithms and use them to run scientific experiments. You will be involved in writing up results and will be credited as an author on submissions to peer-reviewed venues (e.g., NeurIPS, ICLR, JMLR).
While each of our projects is unique, your role will generally have:
- Flexibility. You will focus on research engineering but contribute to all aspects of the research project. We expect everyone on the project to help shape the research direction, analyze experimental results, and participate in the write-up of results.
- Variety. You will work on a project that uses a range of technical approaches to solve a problem. You will also have the opportunity to contribute to different research agendas and projects over time.
- Collaboration. You will work regularly with collaborators from a range of academic labs and research institutions.
- Mentorship. You will develop your research taste through regular project meetings and develop your programming style through code reviews.
- Autonomy. You will be highly self-directed. To succeed in the role, you will likely need to spend part of your time studying machine learning and developing your high-level views on AI safety research.
About You
This role would be a good fit for someone looking to gain hands-on experience with machine learning engineering while testing their personal fit for AI safety research. Interested applicants might be looking to grow an existing portfolio of machine learning research or to transition to AI safety research from a software engineering background.
It is essential that you:
- Have significant software engineering experience or experience applying machine learning methods. Evidence of this may include prior work experience, open-source contributions, or academic publications.
- Have experience with at least one object-oriented programming language (preferably Python).
- Are results-oriented and motivated by impactful research.
It is preferable that you have experience with some of the following:
- Common ML frameworks like PyTorch or TensorFlow.
- Natural language processing or reinforcement learning.
- Operating system internals and distributed systems.
- Publications or open-source software contributions.
- Basic linear algebra, calculus, probability, and statistics.
About the Projects
As a research engineer, you would collaborate on a project about one of the following topics:
- Reward and imitation learning. Developing a reliable set of baseline implementations for algorithms that can learn from human feedback. Extensions may include developing standardized benchmark environments, datasets, and evaluation procedures.
- RL from human feedback. Developing extensions to deep RL from human preferences and evaluating them in continuous control environments (a minimal sketch of the core preference-learning step follows this list).
- Natural language processing. Integrating large language models with reinforcement learning to better understand human intentions. Creating text datasets and using them to fine-tune large language models.
- Adversarial robustness. Applying reinforcement learning techniques to test for vulnerabilities in narrowly superhuman systems such as KataGo.
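To give a flavor of the engineering work these projects involve, here is a minimal PyTorch sketch of the core step behind deep RL from human preferences: fitting a reward model to pairwise human comparisons of trajectory segments. The architecture, shapes, and names here are illustrative assumptions on our part, not FAR AI code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Illustrative reward network mapping a (state, action) pair to a scalar reward."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        # obs: (..., obs_dim), act: (..., act_dim) -> reward: (...)
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)

def preference_loss(model: RewardModel,
                    obs_a: torch.Tensor, act_a: torch.Tensor,
                    obs_b: torch.Tensor, act_b: torch.Tensor,
                    prefs: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry loss on human preferences between two trajectory segments.

    obs_*: (batch, T, obs_dim), act_*: (batch, T, act_dim),
    prefs: (batch,) floats, 1.0 where the human preferred segment A.
    """
    # Sum predicted per-step rewards over each segment.
    r_a = model(obs_a, act_a).sum(dim=1)
    r_b = model(obs_b, act_b).sum(dim=1)
    # Model P(A preferred over B) = sigmoid(r_a - r_b) and maximize
    # the log-likelihood of the human labels.
    return F.binary_cross_entropy_with_logits(r_a - r_b, prefs)
```

The fitted reward model can then stand in for the environment reward when training a policy with a standard RL algorithm; building reliable baselines and benchmarks around pipelines like this is the kind of work the first two projects entail.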
Logistics
You will be an employee of FAR AI, a 501(c)(3) research non-profit.
- Location: Both remote and in-person (Berkeley, CA) are possible.
- Hours: Full-time (40 hours/week).
- Compensation: $80,000-$175,000/year depending on experience and location. We will also pay for work-related travel and equipment expenses. We offer catered lunch and dinner at our offices in Berkeley.
- Application process: A 90-minute programming assessment, two 1-hour interviews, and a 1-2 week paid work trial. If you are not available for a work trial, we may be able to find alternative ways of testing your fit.
Please apply! If you have any questions about the role, please do get in touch at hello@far.ai.