Are you looking for your next position to combine real impact with the latest language generation technology?
Join this tech-for-good start-up as an ML Research Engineer, building the first foundation model in their domain and addressing a large and growing challenge worldwide.
Alongside their foundation model and first-of-its-kind dataset, which serve their B2B market, they also have a consumer (B2C) app that currently has no competitors. So far their solution has been well received both by those in the industry and by those looking to use this type of technology to improve their lives and wellbeing.
You’ll be pre-training (including continued pre-training) and fine-tuning LLMs, including large models at up to 70-billion-parameter scale. Your previous experience must include, at the very least, fine-tuning LLMs; pre-training experience is desirable. The usual Python and PyTorch experience is expected, along with deep learning experience in the language domain.
The role is roughly 40% research and experimentation; 50% ML engineering, including model building, training, fine-tuning, and pre-training; and the remaining 10% deployment and inference challenges.
Your research will cover SOTA deep learning, LLMs/Transformers, and novel reinforcement learning approaches to alignment challenges. You should be familiar with multi-GPU training.
This start-up is moving fast; if you have experience in start-ups or start-up-style environments, you’ll fit right in.
This is a hybrid role, with 2-3 days per week in NYC. Relocation support can be provided if you already have the right to work in the US.
Total compensation will be in the region of $400,000-$750,000, dependent on experience (we are considering senior, staff, principal, and lead levels), comprising a mixture of base salary and stock options along with the usual benefits you’d expect.