Information Security Engineer (Data Engineer - IAM Data Lake)
Job description
We are seeking an experienced Information Security Engineer with strong data engineering expertise to support our IAM Data Lake initiatives. In this role, you will contribute to moderately complex engineering efforts, participate in large-scale planning, and collaborate closely with cross-functional partners to deliver secure, scalable data solutions on Google Cloud Platform (GCP).
You will analyze engineering challenges, provide technical recommendations, and support the implementation of secure data pipelines, ensuring alignment with organizational policies, compliance requirements, and best practices.
Responsibilities
- Contribute to moderately complex Information Security Engineering initiatives and deliverables.
- Analyze technical challenges and provide well-informed solutions based on variable factors and security requirements.
- Support the design, development, and optimization of IAM Data Lake solutions on GCP.
- Build and maintain batch and streaming data ingestion pipelines using GCP-native tools (see the sketch after this list).
- Collaborate with internal teams to address engineering issues, improve processes, and ensure adherence to security standards.
- Apply knowledge of compliance frameworks, security policies, and engineering best practices.
- Work with engineering partners to design scalable, secure data architectures and consumption patterns.
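As a rough illustration of the batch ingestion work above, here is a minimal Airflow sketch that loads daily Parquet drops from a Cloud Storage landing bucket into BigQuery. The DAG id, bucket, and table names are hypothetical, and the example assumes Airflow 2.4+ with the Google provider installed:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

with DAG(
    dag_id="iam_datalake_batch_ingest",  # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Load the day's Parquet drop from the landing bucket into a curated BigQuery table.
    load_parquet = GCSToBigQueryOperator(
        task_id="load_parquet_to_bq",
        bucket="example-iam-landing-bucket",  # hypothetical bucket name
        source_objects=["iam/events/{{ ds }}/*.parquet"],
        destination_project_dataset_table="example_project.iam_lake.events",  # hypothetical table
        source_format="PARQUET",
        write_disposition="WRITE_APPEND",
    )
```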
Requirements
- 4+ years of Information Security Engineering experience, or equivalent experience through work, consulting, military service, or education.
- Proven experience designing and developing data lake architectures on Google Cloud Platform.
- Hands-on expertise with big data technologies including PySpark, Hadoop/HDFS, and columnar data formats (Parquet, Avro, ORC).
- Strong understanding of:
  - Cloud Storage (GCS) bucket architecture, naming standards, lifecycle management, and IAM access controls
  - Pub/Sub-based streaming ingestion and event-driven architectures
  - Incremental ingestion and CDC (Change Data Capture) patterns (a PySpark sketch follows this list)
  - Data consumption mechanisms including APIs, curated datasets, and views
- Experience building production-grade batch and streaming pipelines.
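For the incremental ingestion and CDC requirement above, here is a minimal watermark-based PySpark sketch. The bucket paths, the updated_at column, and the stored watermark value are hypothetical placeholders rather than details from this posting:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("iam-incremental-ingest").getOrCreate()

# Watermark from the previous successful run, e.g. read from a control table.
last_watermark = "2024-01-01T00:00:00"  # placeholder value

# Read the raw landing zone and keep only rows changed since the last run:
# a simple incremental / CDC-style filter on an update timestamp.
increment = (
    spark.read.parquet("gs://example-iam-landing-bucket/iam_events/")  # hypothetical path
    .where(F.col("updated_at") > F.lit(last_watermark))
)

# Append the increment to the curated zone, partitioned by event date.
(
    increment.withColumn("event_date", F.to_date("updated_at"))
    .write.mode("append")
    .partitionBy("event_date")
    .parquet("gs://example-iam-curated-bucket/iam_events/")  # hypothetical path
)
```

A production version would typically persist the new maximum updated_at back to the control table and handle late-arriving updates, but the filtering pattern is the same.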
Technical Skills (Required & Preferred)
| Category | Skill | Required | Importance | Experience |
| --- | --- | --- | --- | --- |
| xms-USIT | Airflow | Yes | 1 | 2-4 years |
| xms-USIT | API Development | Yes | 1 | 2-4 years |
| xms-USIT | CI/CD | Yes | 1 | 2-4 years |
| xms-USIT | Data Modeling | Yes | 1 | 4-6 years |
| xms-USIT | Data Pipelines | Yes | 1 | 2-4 years |
| xms-USIT | Data Processing | Yes | 1 | 4-6 years |
| xms-USIT | Google Cloud Platform | Yes | 1 | 4-6 years |
| xms-USIT | PySpark | Yes | 1 | 4-6 years |
| xms-USIT | Hadoop Ecosystem | No | 2 | |