(Medior) Data Scientist 4.0 factory
Job description
Could your best career choice be one that propels you toward becoming an intrapreneur or entrepreneur in your field of expertise?
Success in industry often relies on key individuals who demonstrate intrapreneurial leadership, i.e. who continuously launch new initiatives and strive for excellence.
The best way to develop this strength is through ad interim projects: working with established intrapreneurs in international companies to support their major expansion or improvement initiatives in governance, studies, or operations.
AETHER manages these opportunities in key strategic sectors, as these offer the greatest learning potential, with multi-million impacts:
- Public Infrastructures
- Process Manufacturing
- Component & System Technologies
One challenge for component & system technologies operations is that failing to deploy and monitor Machine Learning models in production within a hybrid on-prem and cloud architecture can silently degrade industrial robustness and quality performance, until the manufacturing system no longer meets expected precision and reliability standards.

Our service is to provide assistance through a Consultant, in the context of an industrial Manufacturing 4.0 team delivering data-driven tools and AI capabilities to support production and business teams, within a project-driven and agile operating model and with deployments on hybrid on-prem and public cloud infrastructures.

As a Consultant, you will:
- Collaborate with the Manufacturing 4.0, IT, and business teams to support project delivery and alignment.
- Collaborate with IT teams to integrate models into existing systems and the CI/CD chain.
- Document the developed solutions and share best practices within the team.
- Explore, analyze, and prepare data from industrial systems (machines, sensors, MES, quality inspection, etc.).
- Design, train, and evaluate Machine Learning and/or Computer Vision models adapted to industrial constraints.
- Implement end-to-end MLOps pipelines (training, validation, deployment, monitoring).
- Ensure monitoring of model performance in production (quality, drift, robustness) and propose continuous improvements.
- Deploy models into production on a hybrid on-prem and cloud architecture, relying on containers and AWS SageMaker (see the deployment sketch after this list).
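
For a flavour of the work, here is a minimal sketch of a container-based SageMaker training and deployment step. It is illustrative only, not the team's actual pipeline: the ECR image URI, IAM role ARN, and S3 paths are placeholder assumptions, and a real setup would be wired into the CI/CD chain mentioned above.

```python
# Minimal sketch: train and deploy a custom-container model on AWS SageMaker.
# All identifiers (image URI, role ARN, bucket names) are placeholders.
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()

estimator = Estimator(
    image_uri="123456789012.dkr.ecr.eu-west-1.amazonaws.com/train-img:latest",  # ECR image
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",               # IAM role
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/models/",  # trained model artifacts land here
    sagemaker_session=session,
)

# Launch a training job against data staged in S3.
estimator.fit({"train": "s3://my-bucket/data/train/"})

# Stand up a real-time inference endpoint from the trained model.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```

Depending on the use case, the same trained model could instead be served in batch or streaming mode rather than through a real-time endpoint.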
Requirements
Do you have experience in TensorFlow?

- Engineering degree (computer science, data, AI, applied mathematics) or equivalent.
- Confirmed experience (minimum 5 years) as a Data Scientist or Machine Learning Engineer, with significant exposure to industrial projects or production-critical environments.
- Experience designing, training, validating, and deploying Machine Learning models (predictive and/or Computer Vision) in production, and ensuring long-term follow-up.
- Strong command of MLOps best practices: data and model versioning, pipeline automation, CI/CD, performance monitoring, and model lifecycle management.
- Comfortable with hybrid architectures combining on-premise and public cloud; prior hands-on experience with AWS, specifically AWS SageMaker for training and deploying models.
- Ability to collaborate with multidisciplinary teams (IT, data, industrial business teams) and translate complex business problems into robust data solutions.
- Knowledge of industrial/manufacturing environments (OT) and data from sensors, machines, or MES is an advantage.
- Strong interpersonal skills, team mindset, and demonstrated autonomy, rigor, and initiative.
- No mandatory certifications; AWS certifications (Machine Learning, Data Analytics, Cloud Practitioner) or equivalent are a plus if backed by practical mastery of cloud and MLOps environments.
- Required knowledge: supervised and unsupervised ML (regression, classification, anomaly detection, time series); a brief anomaly-detection sketch follows this list.
- Required knowledge: practical Computer Vision (image processing, CNN, deep learning models) or industrial predictive models.
- Required tools: Python, Pandas, NumPy, Scikit-learn, PyTorch and/or TensorFlow.
- Required experience: production deployment of AI models (API, batch, streaming).
- Required MLOps knowledge: training and inference pipelines; versioning (code, data, models); monitoring performance and data/model drift (see the drift-check sketch after this list).
- Required platforms: container-based architectures (Docker/Podman) and container orchestration (Kubernetes/OpenShift).
- Required cloud knowledge: public cloud environments, especially AWS (S3, SageMaker, IAM, ECR, CloudWatch).
- Required software practices: Git, testing, CI/CD.
- Appreciated additional experience: industrial data architectures (data lakes, real-time streaming).
- Appreciated additional experience: Snowflake (warehouse, data sharing, tasks/streams, cost/performance optimization).
- Appreciated additional experience: Dataiku.
- Appreciated additional experience: message brokers (Kafka, MQTT).
- Appreciated additional experience: observability and monitoring tools (Prometheus, Grafana, CloudWatch).
- Appreciated additional experience: managing large or heterogeneous data from industrial sensors (time series).
- Appreciated additional experience: cybersecurity notions applied to industrial and cloud environments.
- Language: fluent French (spoken and written); ability to read and write documentation in English.
- Entrepreneurial experience or ambition is a plus (extra-professional or extra-academic experience is a great start).
- Immediate availability is preferred.
- Valid single permit for non-EU citizens.
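
To illustrate the anomaly-detection knowledge required above, here is a minimal scikit-learn sketch on a synthetic sensor signal. The window length, contamination rate, and injected fault are arbitrary assumptions for illustration, not parameters from a real production system.

```python
# Minimal sketch: unsupervised anomaly detection on sensor time-series windows.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic univariate sensor signal with an injected fault (illustrative only).
signal = np.sin(np.linspace(0, 40, 2_000)) + rng.normal(0, 0.05, 2_000)
signal[1_500:1_510] += 2.0  # simulated sensor fault

# Turn the signal into overlapping sliding windows used as feature vectors.
window = 20
X = np.lib.stride_tricks.sliding_window_view(signal, window)

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = model.predict(X)  # -1 = anomalous window, 1 = normal
print("anomalous windows start at indices:", np.where(labels == -1)[0][:5])
```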
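
Likewise, the drift monitoring named in the MLOps requirements could, in its simplest form, look like the sketch below: a two-sample Kolmogorov-Smirnov check comparing one feature's training-time distribution against a recent production window. The significance threshold, window sizes, and synthetic data are illustrative assumptions, not a prescribed setup.

```python
# Minimal sketch: per-feature data-drift check via a two-sample KS test.
import numpy as np
from scipy import stats

def feature_drift(reference: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> bool:
    """Flag drift when the live distribution of a feature departs
    significantly from the training-time reference distribution."""
    statistic, p_value = stats.ks_2samp(reference, live)
    return p_value < alpha

# Synthetic example: a sensor feature snapshot at training time vs.
# a shifted recent production window (illustrative data only).
rng = np.random.default_rng(42)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time snapshot
live = rng.normal(loc=0.4, scale=1.0, size=1_000)       # shifted production window

if feature_drift(reference, live):
    print("Drift detected: raise an alert and consider retraining")
```

In production this kind of check would typically run per feature on a schedule, feeding alerts into the observability stack (e.g. Prometheus, Grafana, or CloudWatch, as listed above).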