Data Scientist
Job description
- Design, build, and maintain scalable data pipelines that ingest, transform, and serve real estate data across our platform.
- Develop, train, and deploy AI/ML models, including price prediction, sustainability scoring, and market analytics.
- Work on production-grade data and model pipelines, ensuring reliability, monitoring, and performance.
- Collaborate closely with engineers and data scientists to bring models into real product features.
- Take ownership of the data stack end-to-end, from ingestion and storage to modeling, serving, and monitoring.
- Contribute to building a modern MLOps culture, including versioning, experiment tracking, model deployment, and monitoring.
Requirements
- 3 to 7 years of experience in data science, data engineering, or a combined AI/ML engineering role.
- Strong proficiency in Python and SQL.
- Experience developing and deploying ML models in production environments.
- Familiarity with modern data stack tooling (e.g. dbt, Airflow, Spark, Beam, or similar).
Bonus if you also bring:
- Experience with real-time or streaming data (Kafka, Flink, or similar).
- Familiarity with the real estate industry.
- Experience with cloud platforms (AWS, GCP, or Azure) and infrastructure-as-code tooling (e.g. Terraform, Pulumi).
- Familiarity with geospatial data.
- Experience with LLMs and modern AI tooling (LangChain, RAG pipelines, vector databases).
- Understanding of how model outputs translate into product features and business value.
- Experience in B2B, SaaS, or data-intensive product environments.
Benefits & conditions
- A salary between €3,500 and €6,000, depending on your experience and working hours.
- A role for 32 hours or more per week.
- Hybrid working model, with two days in the office.
- Learning and development budget.
- A supportive, engaged team with direct communication, a culture of ownership, and annual team adventures (last year we explored Marrakesh).
We're committed to building a diverse team and welcome applications from all backgrounds. Different perspectives make our product and our team stronger.
Application process
We like to keep things simple and transparent. Here's what you can expect:
Round 1 - Introduction & mutual fit: an informal conversation where we get to know each other. We'll talk about your experience, what excites you, and give you a clear picture of OPENRED and the role.
Round 2 - Deep dive & technical discussion: a more in-depth session with the team. We'll go deeper into your experience with data pipelines, cloud, and AI/ML. This can include discussing past projects or walking through a practical case.
Final step - Offer & alignment: if there's a strong match, we'll discuss the employment terms, answer any remaining questions, and align on expectations from both sides.
We know not everyone checks every box. If this role excites you, we'd still love to hear from you.