Head of Engineering
FAR.AI is seeking applications for a Head of Engineering to lead and scale our engineering team, driving the technical execution of our frontier AI safety research projects.
About Us
FAR.AI is a non-profit AI research institute focused on ensuring the safe development and deployment of frontier AI technologies.
Since starting in July 2022, FAR has grown to 17 FTE, produced 27 academic papers, and established the leading AI safety events for research and international cooperation. Our work is recognized globally, with publications at leading venues such as NeurIPS, ICML and ICLR that have been featured in the Financial Times, Nature News and MIT Tech Review. We leverage our research insights to drive practical change through red-teaming with frontier model developers. Additionally, we help steer and grow the AI safety field by developing research roadmaps with renowned researchers such as Yoshua Bengio; running FAR.Labs, an AI safety co-working space with 40 members; and making targeted grants to technical researchers.
About FAR.Research
Our research team likes to move fast. We explore promising research directions in AI safety and scale up only those showing a high potential for impact. Unlike other AI safety labs that take a bet on a single research direction, FAR aims to pursue a diverse portfolio of projects.
Our current focus areas include:
- Building a science of robustness (e.g. finding vulnerabilities in superhuman Go AIs).
- Finding more effective approaches to value alignment (e.g. training from language feedback).
- Advancing model evaluation techniques (e.g. inverse scaling, codebook features, and learned planning).
We also put our research into practice through red-teaming engagements with frontier AI developers, and collaborations with government institutes.
Other FAR Projects
To build a flourishing field of AI safety research, we host targeted workshops and events, operate an AI safety co-working space in Berkeley called FAR.Labs, and provide grants to academics working on priority research areas. Our previous events include the International Dialogue for AI Safety, which brought together prominent scientists (including 2 Turing Award winners) from around the globe, culminating in a public statement calling for global action on AI safety research and governance. We also host the semiannual Alignment Workshop, where 150 researchers from academia, industry and government learn about the latest developments in AI safety and find collaborators. For more information on FAR.AI’s activities, please visit our latest summary post.
About the Role
As our first Head of Engineering at FAR, you will play a crucial role in building and overseeing a team of Research Engineers working to ensure the safe and responsible development of advanced artificial intelligence systems.
Key responsibilities:
- Team Leadership and Management. Lead and grow our team of Research Engineers from 3 to 6 FTE.
- Strategy. Contribute to the strategic planning of research projects, ensuring alignment with FAR’s goals and objectives.
- Project Management. Oversee the project lifecycle from start to finish, working closely with all members of our research teams (Advisors, Engineers, Scientists) inside and outside of FAR.
- Innovation. Oversee running scientific experiments and advise on the development of scalable implementations of machine learning algorithms. You will be credited as an author in submissions to peer reviewed venues (e.g. NeurIPS, ICLR, JMLR).
- Flexibility and Variety. Focus on engineering execution but contribute to all aspects of the research project. We expect everyone on the project to help shape the research direction, analyze experimental results, and participate in the write-up of results.
- Collaboration and Networking. Represent FAR.AI at conferences, workshops, and in collaborations with other research institutions and AI companies. You’ll have the opportunity to present our work and engage with the broader AI safety community.
- Talent Development. Nurture the professional growth of your team and collaborators through regular project meetings, mentorship, and constructive feedback.
- Learning and Development. Pursue continual development of your skills through internal and external resources. You will have an opportunity in the role to develop your research taste and high-level views on AI safety research by collaborating with our Research Scientists. We are excited to support our team to grow in other areas, and will have a dedicated Learning & Development budget.
About You
You should be excited to help make AI systems safe. We expect you to have at least 2 years of experience leading technical projects or teams in software engineering or machine learning.
It is essential that you have:
- A track record of leading or managing technical projects or teams, ideally in a startup or research environment.
- A strong foundation in software engineering, with proficiency in at least one object-oriented programming language (preferably Python).
- A results-oriented mindset and motivation to pursue impactful research.
- An excellent ability to communicate complex technical concepts and lead collaborative efforts.
It is preferable that you have two or more of the following:
- Experience with natural language processing, adversarial ML or reinforcement learning.
- Familiarity with machine learning methodologies and frameworks such as PyTorch or TensorFlow.
- Knowledge of operating system internals and distributed systems.
- Publications or open-source software contributions.
- A grounding in basic linear algebra, calculus, probability, and statistics.
About the Projects
As Head of Engineering you would lead collaborations and contribute to many projects, with examples below:
- Scaling laws for prompt injections. Will advances in capabilities from increasing model and data scale help resolve prompt injections or “jailbreaks” in language models, or is progress in average-case performance orthogonal to worst-case robustness?
- Robustness of advanced AI systems. Explore adversarial training, architectural improvements and other changes to deep learning systems to improve their robustness. We are exploring this both in zero-sum board games and language models.
- Mechanistic interpretability for mesa-optimization. Develop techniques to identify internal planning in models to effectively audit the “goals” of models in addition to their external behavior.
- Red-teaming of frontier models. Apply our research insights to test for vulnerabilities and limitations of frontier AI models prior to deployment.
Logistics
You will be an employee of FAR AI, a 501(c)(3) research non-profit.
- Location: Both remote and in-person (Berkeley, CA) are possible. We sponsor visas for in-person employees, and can also hire remotely in most countries.
- Hours: Full-time (40 hours/week).
- Compensation: $175,000-$300,000/year* depending on experience and location. We will also pay for work-related travel and equipment expenses. We offer catered lunch and dinner at our offices in Berkeley. *For exceptional candidates with an outstanding track record we may be able to offer additional compensation.
- Application process: A 90-minute programming assessment, two 1-hour interviews, and a 1-2 week paid work trial. If you are not available for a work trial, we may be able to find alternative ways of testing your fit.
- Deadline: November 15th, 2024 – earlier applications are preferred; we may close the round early if a suitable candidate is found.
Please apply! If you have any questions about the role, please do get in touch at talent@far.ai.