Intermediate Java Developer (Big Data)
Job description
Joining the Reporting product line, you will work as a member of a small, highly focused team responsible for delivering backend services for a highly scalable reporting and analytics platform, using leading-edge technologies. This is an opportunity to work in an environment that encourages creative thinking and autonomy. We encourage our developers to think beyond a single component to build complete system solutions. Challenge yourself by learning new technologies, and apply your skills across our different projects and application domains. If you are committed to code that is clean, well-tested, well-reviewed, performant, and secure, then you'll fit in around here.
Tech Stack:
- Microservices container platforms (OpenShift, Kubernetes, CRC, Docker)
- File formats (Avro, Parquet, ORC)
- Large-scale data processing (Kafka)
- Large-scale data platforms (Hadoop, Trino, Spark)
- Dependency injection frameworks (Spring, Guice)
- Log analysis (Splunk)
- CI/CD and build tools (Maven, Git, Jenkins)
- Application frameworks (Vert.x)
- Text search engines (Lucene, Elasticsearch)
Responsibilities:
- Develop ETL and ELT jobs and processes
- Support data analysis and design efforts within the wider team
- Migrate existing services to microservices, with the goal of reducing complexity at the design and architecture level
- Write unit and integration tests for your Java code
- Collaborate with testers on the development of functional test cases
- Develop deployment systems for Java-based services
- Collaborate with product owners on user story generation and refinement
- Monitor and support the operation of production systems
- Participate in knowledge sharing activities with colleagues
- Take part in pair programming and peer code reviews
Requirements
Do you have experience in test-driven development?
- Minimum 3 years of Java development experience in an Agile environment, building scalable applications and services with a focus on big data solutions and analytics
- 2+ years' experience developing ETL/ELT processes using relevant technologies and tools
- Experience working with data lake and data warehouse platforms for both batch and streaming data sources
- Experience with ANSI SQL or other flavours of SQL
- Experience processing unstructured, semi-structured, and structured data
- A good understanding of ETL/ELT principles, best practices, and patterns
- Experience with big data technologies such as Hadoop, Spark, and other Apache platform products
- Experience with RESTful services
- A passion for test-driven development
- CI/CD experience
- Exposure to data visualisation and Business Intelligence solutions
Benefits & conditions
Though we offer competitive compensation, benefits, and all the other perks one would expect from an established company, we are not your typical technology company. Global Relay is a career-building company. A place for big ideas. New challenges. Groundbreaking innovation. It's a place where you can genuinely make an impact and be recognized for it.
We believe great businesses thrive on diversity, inclusion, and the contributions of all employees. To that end, we recruit candidates from different backgrounds and foster a work environment that encourages employees to collaborate and learn from each other, completely free of barriers.