Research Engineer (Expression of Interest)
FAR.AI is seeking expressions of interest from research engineers interested in working on AI safety projects and red-teaming. We are currently keeping our team small and nimble, so we are only interviewing candidates who are an exceptional fit for this role. We plan to grow more aggressively later in the year once we have onboarded additional management capacity, so we encourage anyone interested to express interest in a current or future role.
About Us
FAR.AI is a technical AI research non-profit, focused on ensuring the safe development and deployment of frontier AI technologies.
Since starting in July 2022, FAR has grown to 12 FTEs, produced 13 academic papers, hosted events for some of the world’s leading AI and computer science researchers, and opened an AI-safety-focused co-working space that is home to around 40 members.
About FAR.Research
Our research team likes to move fast. We explore promising research directions in AI safety and scale up only those showing a high potential for impact. Unlike other AI safety labs that take a bet on a single research direction, FAR aims to pursue a diverse portfolio of projects. We also put our research into practice through red-teaming engagements with frontier AI developers.
Our current focus areas are building a science of robustness (e.g. finding vulnerabilities in superhuman Go AIs), developing more effective approaches to value alignment (e.g. training from language feedback), and evaluating models (e.g. inverse scaling and codebook features).
Other FAR Projects
To build a flourishing field of AI safety research, we host targeted workshops and events and operate a co-working space in Berkeley called FAR.Labs. Our previous events include the International Dialogue for AI Safety, which brought together prominent scientists (including two Turing Award winners) from around the globe, culminating in a public statement calling for global action on AI safety research and governance. We recently hosted the New Orleans Alignment Workshop, where over 140 researchers from academia and industry learned about AI safety and found collaborators. For more information on FAR.AI’s activities, please visit our latest post.
About the Role
You will collaborate closely with research advisers and research scientists inside and outside of FAR. As a research engineer, you will develop scalable implementations of machine learning algorithms and use them to run scientific experiments. You will be involved in the write-up of results and credited as an author in submissions to peer-reviewed venues (e.g. NeurIPS, ICLR, JMLR).
While each of our projects is unique, your role will generally have:
- Flexibility. You will focus on research engineering but contribute to all aspects of the research project. We expect everyone on the project to help shape the research direction, analyse experimental results, and participate in the write-up of results.
- Variety. You will work on a project that uses a range of technical approaches to solve a problem. You will also have the opportunity to contribute to different research agendas and projects over time.
- Collaboration. You will work regularly with collaborators from a variety of academic labs and research institutions.
- Mentorship. You will develop your research taste through regular project meetings and develop your programming style through code reviews.
- Autonomy. You will be highly self-directed. To succeed in the role, you will likely need to spend part of your time studying machine learning and developing your high-level views on AI safety research.
About You
This role would be a good fit for someone looking to gain hands-on experience with machine learning engineering while testing their personal fit for AI safety research. We imagine interested applicants might be looking to grow an existing portfolio of machine learning research or looking to transition to AI safety research from a software engineering background.
It is essential that you:
- Have significant software engineering experience or experience applying machine learning methods. Evidence of this may include prior work experience, open-source contributions, or academic publications.
- Have experience with at least one object-oriented programming language (preferably Python).
- Are results-oriented and motivated by impactful research.
It is preferable that you have experience with some of the following:
- Common ML frameworks like PyTorch or TensorFlow.
- Natural language processing or reinforcement learning.
- Operating system internals and distributed systems.
- Publications or open-source software contributions.
- Basic linear algebra, calculus, probability, and statistics.
About the Projects
As a research engineer, you would lead collaborations and contribute to many projects; examples include:
- Scaling laws for prompt injections. Will advances in capabilities from increasing model and data scale help resolve prompt injections or “jailbreaks” in language models, or is progress in average-case performance orthogonal to worst-case robustness?
- Robustness of advanced AI systems. Explore adversarial training, architectural improvements, and other changes to deep learning systems to improve their robustness. We are exploring this in both zero-sum board games and language models.
- Mechanistic interpretability for mesa-optimization. Develop techniques to identify internal planning in models, so we can audit their “goals” in addition to their external behavior.
- Red-teaming of frontier models. Apply our research insights to test for vulnerabilities and limitations of frontier AI models prior to deployment.
Logistics
You will be an employee of FAR.AI, a 501(c)(3) research non-profit.
- Location: Both remote and in-person (Berkeley, CA) are possible. We sponsor visas for in-person employees, and can also hire remotely in most countries.
- Hours: Full-time (40 hours/week).
- Compensation: $80,000-$175,000/year depending on experience and location. We will also pay for work-related travel and equipment expenses. We offer catered lunch and dinner at our offices in Berkeley.
- Application process: a 90-minute programming assessment, two one-hour interviews, and a 1-2 week paid work trial. If you are not available for a work trial, we may be able to find alternative ways of testing your fit.
Please express interest in this role! If you have any questions, please get in touch at talent@far.ai.