About Us:
We are establishing the first distributed AI infrastructure dedicated to personalized AI. The evolving needs of a data-driven society demand scalability and flexibility. We believe the future of AI is distributed, enabling real-time data processing at the edge, closer to where data is generated. We are building a future where a company's data and IP remain private, and where large models can be brought directly to consumer hardware without removing information from the model.
Role Overview:
As a Staff R&D AI Engineer, you will lead the development of cutting-edge AI systems that bridge computer vision, natural language understanding, and action learning. You'll architect and implement Vision-Language-Action (VLA) models, advance reinforcement learning applications, and push the boundaries of multimodal AI integration. This role combines deep expertise in both computer vision and large language models with hands-on experience in reinforcement learning to create intelligent systems that can understand, reason about, and interact with complex environments. You'll drive research initiatives, mentor technical teams, and translate breakthrough AI research into practical applications across diverse domains.
Key Responsibilities:
Design and develop Vision-Language-Action (VLA) models that integrate visual perception, natural language understanding, and action prediction
Architect and implement reinforcement learning systems for sequential decision-making, including policy learning and skill acquisition
Build and optimize computer vision pipelines for perception tasks, including object detection, segmentation, tracking, and scene understanding
Develop and fine-tune large language models for instruction following, reasoning, and task planning applications
Implement RLHF (Reinforcement Learning from Human Feedback) systems to improve model alignment and safety
Create multimodal training pipelines that leverage synthetic and real-world data for robust model performance
Research and prototype novel AI architectures that combine vision, language, and action learning
Collaborate with engineering teams to integrate AI models into applications and validate performance across domains
Optimize model inference performance for real-time applications across edge and cloud deployments
Lead technical initiatives, mentor junior AI engineers, and establish best practices for AI model development
Stay current with the latest research in VLA models, multimodal AI, and robotics to drive the innovation roadmap
Present findings at conferences and publish research to advance the field
Qualifications & Skills:
7+ years of experience in AI/ML engineering with 4+ years focusing on deep learning and neural network development
Strong understanding of reinforcement learning algorithms and their applications (PPO, SAC, TD3, etc.)
Strong expertise in both computer vision and natural language processing with hands-on model development experience
Proficiency in PyTorch and/or TensorFlow with experience training and deploying large-scale models
Experience with transformer architectures, attention mechanisms, and large language model fine-tuning
Hands-on experience with computer vision tasks including object detection, semantic segmentation, and visual tracking
Strong programming skills in Python with experience in distributed training and model optimization
Understanding of sequential decision-making and control systems fundamentals
Experience with MLOps practices including model versioning, monitoring, and deployment pipelines
Proven ability to work independently on complex research problems and deliver practical solutions
Strong communication skills and experience collaborating with cross-functional engineering teams
Preferred Qualifications:
PhD in Computer Science, Robotics, AI/ML, or related field with focus on multimodal learning or robotics
Direct experience developing or working with Vision-Language-Action (VLA) models or similar multimodal architectures
Experience with RLHF implementation and human feedback integration for model alignment
Background in imitation learning, inverse reinforcement learning, or learning from demonstrations
Experience with real-world system deployment and sim-to-real transfer techniques
Knowledge of 3D computer vision, spatial reasoning, or multi-modal perception systems
Experience with distributed training frameworks (DeepSpeed, FairScale, Horovod) and large-scale model training
Familiarity with edge AI deployment and model optimization techniques (quantization, pruning, distillation)
Experience with embodied AI research or projects involving agent-environment interaction
Published research in top-tier AI/ML conferences (NeurIPS, ICML, ICLR, CoRL, etc.)
Open-source contributions to major AI/ML frameworks or robotics projects
Startup experience with ability to rapidly prototype and iterate on AI solutions
Experience with cloud platforms (AWS, GCP, Azure) and containerization technologies
Background in safety-critical AI systems or AI alignment research
We at webAI are committed to living out the core values that form the foundation of how we operate as a team. We seek individuals who exemplify the following:
Truth - Emphasizing transparency and honesty in every interaction and decision.
Ownership - Taking full responsibility for one’s actions and decisions, demonstrating commitment to the success of our clients.
Tenacity - Persisting in the face of challenges and setbacks, continually striving for excellence and improvement.
Humility - Maintaining a respectful and learning-oriented mindset, acknowledging the strengths and contributions of others.
Benefits:
Competitive salary and performance-based incentives
Comprehensive health, dental, and vision benefits package
$200/month health and wellness stipend
$400/year continuing education credit
Flexible work week
Free parking for in-office employees
Unlimited PTO
Parental and bereavement leave
Supplemental life insurance
webAI is an Equal Opportunity Employer and does not discriminate against any employee or applicant on the basis of age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances. We adhere to these principles in all aspects of employment, including recruitment, hiring, training, compensation, promotion, benefits, social and recreational programs, and discipline. In addition, it is the policy of webAI to provide reasonable accommodation to qualified employees who have protected disabilities to the extent required by applicable laws, regulations and ordinances where a particular employee works.