Locations
- Bulgaria
- Georgia
- Lithuania
- Mexico
- Moldova
- Poland
- Romania
Company Background
The client is a global leader in access solutions, known for its innovative products in the Smart Home and IoT space. Their portfolio includes smart garage door openers, locks, card readers, gates, and cameras, all integrated within a proprietary digital ecosystem. With a vast installed base worldwide, the client is at the forefront of smart access technology, continuously evolving through AI and cloud innovation.
Project Description
The current initiative focuses on strengthening the AI infrastructure supporting object detection and generative AI models. Key areas include enhancing model validation scripts, automating model testing on edge devices, and modernizing data curation processes. The team aims to deliver high-performance, scalable solutions that ensure consistent model accuracy across diverse hardware platforms and improve real-time smart notifications. Emphasis is placed on system automation, AI model deployment, and the seamless transition from traditional object detection to generative AI-based features.
Technologies
- AWS
- Azure
- Databricks
- Apache Spark
- ML frameworks (TensorFlow, PyTorch)
- .NET Core
- Python
- NumPy
- Pandas
- Scikit-learn
- LangChain
What You'll Do
- Design, develop, and validate scalable AI platforms for edge and cloud deployment
- Automate testing and benchmarking pipelines for AI models across various hardware environments
- Build and maintain validation scripts for object detection and GenAI models
- Implement CI/CD pipelines for ML workflows and infrastructure
- Investigate hardware-related model performance discrepancies and optimize for quantization
- Collaborate with firmware and data teams to improve model reliability and data labeling workflows
- Enhance data annotation quality and automate dataset management for vision and GenAI use cases
Job Requirements
- 5+ years of experience in building and maintaining ETL/ELT pipelines, data lakes, and warehousing solutions
- Proficient in distributed data processing with Apache Spark; Databricks experience preferred
- Strong production-level Python skills and familiarity with API development
- Hands-on experience with AWS and/or Azure
- Knowledge of ML frameworks (e.g., TensorFlow, PyTorch), model deployment, quantization, and GenAI tools like LangChain
- Experience with MLOps practices, CI/CD pipelines, and model validation techniques
- Solid understanding of computer vision tasks, dataset labeling, and edge-device ML deployment
- English level: B1+ (written and spoken)
What We Offer
The global benefits package includes:
- Technical and non-technical training for professional and personal growth;
- Internal conferences and meetups to learn from industry experts;
- Support and mentorship from an experienced colleague to foster your professional growth and development;
- Internal startup incubator;
- Health insurance;
- English courses;
- Sports activities to promote a healthy lifestyle;
- Flexible work options, including remote and hybrid opportunities;
- Referral program for bringing in new talent;
- Work anniversary program and additional vacation days.
Didn't find anything suitable?
We're always starting new projects and we'd love to work with you. Please send your CV and we'll get in touch.