Senior Geospatial Analytics Engineer
Job description
ICEYE's Flood Solutions exist to help people make better decisions when floods happen. We strive to provide the most accurate information before, during, and after flood events.
You will drive analytics development within ICEYE Flood Solutions, ensuring we turn multi-source flood observations into a single, consistent flood extent and depth output that customers can trust in real conditions, including cloud cover, darkness, and other visibility-limiting environments.
The outcomes are customer deliverables that support the full flood response lifecycle, for example:
- Early warning for flood-prone areas
- Rapid situation awareness, so responders understand what is happening and where
- Search and rescue support, by highlighting areas likely to be impacted
- Damage assessment and claims workflows, by quantifying where flooding occurred and how severe it was
- Communication to affected residents or policyholders, with clear, explainable outputs
- Resource allocation, so response efforts go where they matter most
- Flood risk management, by improving the evidence base for future mitigation
You will ensure these outputs are good enough for real decisions. They need to ship reliably under time pressure, be explainable and consistent, and keep improving as coverage and use cases expand.
You will lead the analytics end-to-end: translate customer & user needs into analytical requirements, define assumptions and acceptance criteria, drive validation, and guide the team's analytical trade-offs.
You will ride the elevator in a practical sense. You can work tactically in the core algorithms and data pipelines, and you can also operate at the product and operational level by shaping requirements, aligning stakeholders, and making clear "good enough" decisions that keep delivery moving.

- You have deep domain knowledge of different types of floods on a global scale. In practice, you know what it takes to accurately map a pluvial flood in urban Japan or a coastal flood in Florida.
- You enjoy going deep on algorithms and data quality, and you also care about what the user actually needs and can act on.
- You have shipped analytics into production and you think in terms of validation, failure modes, and operational reality, not just model quality.
- You can make trade-offs explicit. You know when to improve accuracy and when to ship good enough with clear limitations.
- You like turning recurring pain into better defaults (validation tooling, test datasets, pipelines, runbooks) that others actually adopt.

We don't just do one-off data science projects; we build analytical libraries, workflows, and products. We focus on automation, validation, and continuous improvement.
Flow over ceremony (direction we are accelerating toward)
We are moving toward flow. Finish over start. Limit WIP. Remove blockers early. Ship in small increments. We use flow and reliability signals (lead time, deployment frequency, MTTR) to steer improvements.
Paved paths with escape hatches (direction we are standardizing)
We standardize the basics so teams can move fast safely. Templates, pipelines, and guardrails should make the safe way the easy way. Escape hatches exist when context demands it. We improve paved paths based on real usage and friction.

- Improve how we initiate analysis, reduce false activations, and decide what constitutes an event worth tracking
- Improve peak and end-of-event logic so releases converge toward maximum impact with fewer surprises
- Strengthen release readiness under different delivery expectations, for example fast updates early and better accuracy later
- Define and evolve acceptance criteria and quality tiers so good enough is explicit and repeatable
- Improve depth and extent quality in hard conditions, including dense urban areas, complex terrain, and variable DEM quality
- Make confidence and limitations clear in outputs and release notes so customers can act safely
- Ensure deliverables are consistent and easy to integrate, including depth rasters (GeoTIFF), extent vectors (GeoPackage/GeoJSON), metadata (JSON), release notes, and supporting artifacts
- Improve robustness, runtime, and failure handling so we deliver under time pressure
- Strengthen how we combine SAR observations with supporting evidence sources, for example gauge data and other signals
- Improve how evidence affects activation decisions, quality, and confidence communication
- Translate product needs into analytical specifications, assumptions, and acceptance criteria, and validate that they are met
- Make trade-offs explicit and documented using lightweight decision records and clear assumptions
- Prioritize analytical improvements with the Product Manager and team technical leadership
- Design, implement, and productionize analytical improvements in Python and geospatial tooling
- Build validation and regression checks so quality is measurable and repeatable, not tribal
- Review analytical changes with a high bar for clarity, correctness, and maintainability
- Turn recurring pain into better defaults: test datasets, validation tooling, pipeline improvements, analysis runbooks
- Improve operability: signals that matter, faster diagnosis, fewer recurring incidents
- Use AI-assisted workflows (Cursor, ChatGPT, Claude Code) to accelerate routine work while staying accountable for correctness, security, and quality

- You ship a meaningful analytical improvement into production that measurably improves output quality, delivery reliability, or delivery speed
- You tighten one critical part of the event workflow (activation, peak/end-of-event, release readiness) with clearer criteria and fewer surprises
- You introduce one repeatable validation or regression mechanism that improves confidence without slowing delivery
- You deliver one adoption-ready Enablement output the team actually uses (validation tool, test dataset, template pipeline, runbook)
Requirements
Floods are complex. Inputs are uncertain, terrain data quality varies, and customers still need answers under time pressure. The work is to turn multi-source observations into outputs that are accurate enough for decision-making, with clear confidence and known limitations.
This role is for someone who wants real ownership. You will shape the product's analytical direction and ship improvements into production, not just prototypes.

- Expertise in hydrology, geosciences, geography, or a related geospatial field, with proven ability to analyze and understand complex, real-world spatiotemporal systems
- Senior-level experience delivering production-grade systems (typically 7+ years or equivalent)
- Strong Python skills, and experience making analytics production-grade (tests, reproducibility, performance, failure handling)
- Practical geospatial competence: you have solid remote sensing (SAR, optical) and geoinformatics knowledge, you know your way around raster and vector formats and CRS concepts, and you know how to ship geospatial deliverables (GeoTIFF, GeoPackage/GeoJSON/GeoParquet) with clear metadata
- Experience owning quality criteria and validation for analytics-heavy products, including uncertainty and confidence communication
- Product thinking: you can connect user needs and operational constraints to analytical choices and acceptance criteria
- Operational mindset: you design for resilience, debuggability, and supportability, and you improve systems based on incidents and real use
- Office collaboration: you welcome working 3 days per week in the Espoo office and you thrive in direct collaboration

- Flood modelling, flood forecasting, or other natural catastrophe analytics experience
- Geospatial Machine Learning experience
- AI leverage with judgment: you use tools like Cursor, ChatGPT, or Claude Code to speed up routine work, you know how to provide context and constraints to LLMs, and you verify outputs properly
- Experience scaling geospatial delivery (versioning, schemas, APIs, secure delivery)
- Familiarity with geospatial and Earth observation standards used for interoperability and scalable data access, for example STAC, OGC APIs, and Zarr
- Familiarity with Kubernetes, Docker, and infrastructure-as-code in a product team context
- Experience in the insurance sector, risk modelling, or disaster response related to floods

- Panel interview
- Task presentation