AWS Data Engineer (Palantir Foundry) - Contract Inside IR35 - London or Leeds, UK

Cactus IT Solutions UK Ltd
Manor Park, United Kingdom
2 days ago

Role details

Contract type
Contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English

Job location

Manor Park, United Kingdom

Tech stack

Amazon Web Services (AWS)
Azure
Continuous Integration
Information Engineering
Data Governance
Data Security
Database Development
GitHub
Hive
Identity and Access Management
JSON
Python
SQL Databases
Parquet
S3 Bucket
Software Repository
GitLab
Data Lake
PySpark
Enterprise Integration
Integration Frameworks
Terraform
Databricks

Job description

Role Overview

Configure and support secure integration between Palantir Foundry, Databricks on AWS and AWS-hosted data sources. The role covers data held in S3, registered through Databricks Hive Metastore, governed or exposed through Unity Catalog, and consumed by Foundry using the Databricks connector or approved ingestion patterns.
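The pattern above depends on cross-account read access from Databricks-governed compute to the source S3 buckets. As a purely illustrative sketch (bucket name, account ID and role name below are hypothetical, not from this role), a minimal read-only bucket policy for a Databricks-assumed IAM role could be built like this:

```python
import json

# Hypothetical identifiers for illustration only.
SOURCE_BUCKET = "example-source-data"
DATABRICKS_ROLE_ARN = "arn:aws:iam::111122223333:role/databricks-unity-catalog-access"

def read_only_bucket_policy(bucket: str, principal_role_arn: str) -> dict:
    """Build a minimal read-only S3 bucket policy for one cross-account role."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowCrossAccountRead",
                "Effect": "Allow",
                "Principal": {"AWS": principal_role_arn},
                # Listing the bucket and reading objects is the minimum needed
                # for an external location / metastore registration to work.
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    f"arn:aws:s3:::{bucket}",
                    f"arn:aws:s3:::{bucket}/*",
                ],
            }
        ],
    }

print(json.dumps(read_only_bucket_policy(SOURCE_BUCKET, DATABRICKS_ROLE_ARN), indent=2))
```

In practice the policy would be managed through Terraform or an equivalent IaC workflow, and paired with the matching KMS key grants for encrypted buckets.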

Key Responsibilities

  • Configure and support Palantir Foundry integration with Databricks on AWS.
  • Set up and validate the Foundry Databricks connector for approved SQL Warehouses, tables, views and Unity Catalog-governed objects.
  • Support end-to-end access from AWS S3 sources through Hive Metastore and Unity Catalog into Foundry datasets or products.
  • Configure Foundry Data Connection, datasets, syncs, projects, permissions and lineage as required.
  • Work with AWS teams to define IAM roles, S3 bucket policies, KMS permissions and approved cross-account access patterns.
  • Troubleshoot Foundry connector failures, Databricks authentication issues, Unity Catalog grants, schema mismatches and query failures.
  • Document lineage across AWS S3, Hive Metastore, Unity Catalog and Foundry consumption layers.
  • Work with platform, data engineering, architecture, security and governance teams to deliver a secure and supportable integration.
Required Skills and Experience

  • Hands-on Palantir Foundry experience, including connectors, Data Connection, datasets, syncs, projects and permissions.
  • Experience configuring or supporting the Palantir Foundry Databricks connector.
  • Strong Databricks on AWS experience, including SQL Warehouses, clusters, tables, views and access controls.
  • Good understanding of Hive Metastore and Unity Catalog, including catalogs, schemas, grants, storage credentials and external locations.
  • Strong AWS IAM, S3, KMS, bucket-policy and cross-account access experience.
  • Terraform or similar Infrastructure as Code experience.
  • Strong ability to troubleshoot access, schema, query, sync and connectivity issues across Foundry, Databricks and AWS.

Desirable Skills

  • Experience with Foundry Virtual Tables, Ontology, Workshop, Pipeline Builder or Code Repositories.
  • Experience with Delta Lake, Parquet, Iceberg, CSV and JSON.
  • Data governance, lineage, secure data-sharing and access-approval experience.
  • Python, PySpark or SQL development experience.
  • CI/CD experience using GitLab, GitHub Actions, Azure DevOps or similar.

Key Deliverables

  • Configured and validated Foundry Databricks connector.
  • Integration design covering AWS S3, Hive Metastore, Unity Catalog and Foundry consumption.
  • AWS IAM, S3 and KMS access design for source data.
  • Validated access to approved Databricks tables, views or SQL endpoints.
  • Lineage, access-control and operational support documentation.


Apply for this position