AI Engineer
Overview
At Parallel, we build AI agents to help healthcare facilities automate their administrative processes, starting with medical coding. Today, up to 25% of healthcare spending is lost in manual, repetitive administrative work. Legacy software has made things slower, more complex, and more frustrating for care teams. Our technology combines Large Language Models with system-level automation to handle real work inside existing hospital tools. No integrations, no disruption. Our first agent automates end-to-end coding workflows and frees up time for medical staff to focus on patients. We launched in 2024, are backed by top investors, and we’re just getting started. The future of healthcare isn’t just digital — it’s automated. Come help us build it.
Founded in summer 2024; small team from top-tier companies; raised $3.5M from Frst and Y Combinator; already working with leading hospitals and clinics.
Role
As a Founding AI Engineer, you will help shape the core AI architecture, define our machine learning strategy, and embed intelligent systems directly into hospitals. This is a rare opportunity to build with high ownership, alongside a small, passionate team that cares deeply about craft, clarity, and impact.
You’ll work closely with the CTO and founding team on everything from agentic LLM workflows to data pipelines and AI-first product design. If you’re excited by deploying real-world AI that improves patient care and reduces hospital burnout, we’d love to talk.
This is a full-time role focused on designing and deploying production-ready AI systems in real-world clinical environments.
Mission
As a Founding AI Engineer, you will:
- Design and implement LLM-powered systems for automating medical coding
- Build and iterate on agent-based workflows tailored to complex clinical operations
- Collaborate on integrating AI outputs with user-facing apps used by doctors and hospital staff
- Work closely with hospital IT to build secure and scalable data ingestion pipelines (adhering to health data security standards)
- Partner with the CTO to define the AI roadmap and integrate state-of-the-art tools with robust backend infrastructure
- Own the full lifecycle of AI features: research, prototyping, evaluation, deployment, and monitoring
Qualifications
- 5+ years of experience working on applied ML/AI problems in production environments
- Strong experience with LLMs and NLP, including prompt engineering and/or fine-tuning
- Proficiency in Python or Node.js and experience with ML libraries (e.g., Hugging Face, LangChain, PyTorch)
- Familiarity with backend services (Node.js/TypeScript) and data infrastructure is a strong plus
- A deep sense of ownership and ability to move from prototype to product quickly
- Experience working with sensitive or regulated data (healthcare, finance, etc.) is a bonus
Technical stack
- Backend: TypeScript with NestJS, Express, Prisma, Postgres
- Frontend: React, TanStack, Tailwind
- Data: Python & Node (for low-level proxy servers)
- Tools: Monorepo, GitHub, GitHub Actions
- Infra: AWS, Azure, Cloudflare, Docker, Terraform, Kubernetes
- Observability: Datadog
- AI/ML: Hugging Face, LangChain
Engineering Mindset
- Data security is our foundation, given our work with sensitive health data
- We focus on solving user problems, not shipping features — deep product involvement is essential
- Full type safety from database to UI
- Rapid development with tools like Cursor
- Automated best practices with eslint, Prettier, Jest, and TypeScript
- We leverage the latest technologies and libraries, but sometimes old, boring tech that does the job is what’s needed
- Infrastructure should empower — not block — product iteration
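To illustrate what “full type safety from database to UI” means in practice, here is a minimal, self-contained TypeScript sketch. The `Claim` type and function names are hypothetical, invented for this example — in the real stack the type would be generated by Prisma from the database schema and shared with the React frontend, so a schema change surfaces as a compile-time error everywhere.

```typescript
// Hypothetical example — not Parallel's actual schema.
// In the real stack, a type like this would come from @prisma/client.
type Claim = {
  id: string;
  icd10Code: string;
  status: "pending" | "coded";
};

// Backend layer: returns typed records (a NestJS provider in practice).
function getPendingClaims(claims: Claim[]): Claim[] {
  return claims.filter((c) => c.status === "pending");
}

// UI layer: consumes the same type — a field rename or a new status
// value fails at compile time here, not at runtime in production.
function renderClaimLabel(claim: Claim): string {
  return `${claim.icd10Code} (${claim.status})`;
}

const claims: Claim[] = [
  { id: "1", icd10Code: "E11.9", status: "pending" },
  { id: "2", icd10Code: "I10", status: "coded" },
];

console.log(getPendingClaims(claims).map(renderClaimLabel));
// → [ 'E11.9 (pending)' ]
```

Because both layers import one source of truth for the type, there is no hand-written duplication to drift out of sync with the database.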