Principal Data Platform Architect
Job description
We are seeking a Principal Data Platform Architect with proven experience leading enterprise data platform migrations from Palantir Foundry to Databricks. This role will drive the strategic design, migration, and optimization of the enterprise data ecosystem, transitioning data pipelines, ontologies, and transformation workflows from Palantir Foundry to a scalable Databricks Lakehouse architecture. The successful candidate will provide architectural leadership and technical direction, ensuring data integrity, governance, performance, and long-term platform sustainability. You will collaborate closely with data engineering, DevOps, and business stakeholders to modernize the data platform, establish best practices, and deliver a secure and high-performance migration.
Responsibilities
Define and execute the end-to-end migration roadmap from Palantir Foundry to Databricks, including target architecture, data models, security, and governance frameworks.
Analyze existing Foundry pipelines, datasets, and ontologies, and redesign them into scalable Databricks (Delta Lake) ETL/ELT workflows using PySpark and SQL.
Implement validation frameworks to ensure complete data accuracy, consistency, and lineage throughout the migration process.
Optimize Spark workloads, cluster configurations, and storage strategies to ensure high performance and cost-efficient operations in Databricks.
Collaborate with data engineers, platform teams, and business stakeholders to minimize migration risk, ensure smooth cutover, and maintain production stability.
Requirements
Extensive hands-on Spark experience (must have).
Advanced Databricks Experience. Deep knowledge of Databricks architecture, Delta Lake, job orchestration, cluster management, and performance tuning.
Proven Migration Experience (Palantir to Databricks). Demonstrated experience leading or executing platform migrations, including pipeline conversion, data model redesign, and production cutover.
Expert-Level PySpark & Python Skills. Strong ability to design, optimize, and refactor distributed data processing workflows.
Advanced SQL & Data Modeling Expertise. Experience in dimensional modeling, lakehouse architecture patterns, and query optimization.
Cloud Platform Experience (Azure preferred). Hands-on experience deploying and managing data platforms in cloud environments, including storage, security, and networking considerations.
Nice to have
Hands-on Expertise in Palantir Foundry. Proven experience with Foundry pipelines, ontologies, data lineage, transformations, and platform governance.
Understanding of the Investment Data Domain.
Familiarity with Dynatrace or Datadog for system observability and monitoring.
Databricks certification, cloud certifications (Azure/AWS), or enterprise data architecture certifications.
About the company
Luxoft, a DXC Technology Company, (NYSE: DXC), is a digital strategy and software engineering firm providing bespoke technology solutions that drive business change for customers the world over. Luxoft uses technology to enable business transformation, enhance customer experiences, and boost operational efficiency through its strategy, consulting, and engineering services. Luxoft combines a unique blend of engineering excellence and deep industry expertise, specializing in automotive, financial services, travel and hospitality, healthcare, life sciences, media and telecommunications.