Kafka Developer
Job description
1. Have you developed data ingestion pipelines (in Python, not Java) using the Kafka Producer and Consumer APIs? Refer to Kafka documentation, Chapters 2.1 and 2.2. (A minimal pipeline sketch follows this list.)
2. Have you implemented data ingestion pipelines using Kafka Connect (Debezium and/or JDBC)? Refer to Kafka documentation, Chapters 2.1, 2.4, and 8. (See the connector-registration sketch after this list.)
(That is, built connectors to extract data from sources, apply minor transformations at the topic level, and load data into destinations. Sources can be databases, streams, or REST/SOAP APIs; destinations can be databases or flat files.)
3. Have you configured event streams for data platforms? Refer to Kafka documentation, Chapter 3.
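To make question 1 concrete, here is a minimal sketch of such a pipeline using the confluent-kafka Python client. The broker address, topic names, and consumer group are placeholders, and the transformation is deliberately trivial:

```python
# Minimal consume-transform-produce pipeline (illustrative only).
from confluent_kafka import Consumer, Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "ingestion-demo",        # hypothetical consumer group
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["raw-events"])       # hypothetical source topic

try:
    while True:
        msg = consumer.poll(1.0)         # wait up to 1s for a record
        if msg is None:
            continue
        if msg.error():
            raise RuntimeError(msg.error())
        if msg.value() is None:          # skip tombstone records
            continue
        transformed = msg.value().upper()  # stand-in for real transform logic
        producer.produce("clean-events", value=transformed)
        producer.poll(0)                 # serve delivery callbacks
finally:
    producer.flush()
    consumer.close()
```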
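For question 2, deploying a connector usually means POSTing a JSON configuration to the Kafka Connect REST API (port 8083 by default). Below is a sketch registering a hypothetical Debezium MySQL source connector for CDC; property names follow recent Debezium releases (exact required fields vary by version), and every connection detail is a placeholder:

```python
# Register a Debezium MySQL source connector via the Connect REST API.
import json
import requests

connector = {
    "name": "inventory-cdc",  # hypothetical connector name
    "config": {
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "database.hostname": "mysql.example.internal",   # placeholder host
        "database.port": "3306",
        "database.user": "debezium",
        "database.password": "secret",
        "database.server.id": "184054",
        "topic.prefix": "inventory",                     # Debezium 2.x naming
        "table.include.list": "inventory.orders",
        "schema.history.internal.kafka.bootstrap.servers": "localhost:9092",
        "schema.history.internal.kafka.topic": "schema-changes.inventory",
    },
}

resp = requests.post(
    "http://localhost:8083/connectors",
    headers={"Content-Type": "application/json"},
    data=json.dumps(connector),
)
resp.raise_for_status()
print(resp.json())
```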
We are offering some of the most challenging and exciting work on distributed event streams. You will be working on sophisticated platforms, products, and applications.
We are looking for a developer with a real passion for data pipelines and actual hands-on experience building data applications on Kafka. This is a specialist, individual-contributor role. Product development experience, preferably at a startup or in a lean team, is desired. You will work with our data science team on the development of several data applications.
Requirements
Do you have experience with ZooKeeper? This is an immediate requirement, and we will run an accelerated interview process for fast closure; you would be required to be proactive and responsive.
- Must have hands-on experience with Kafka Connect using Schema Registry in a very high-volume environment (see the Schema Registry sketch after this list)
- Must have worked with JDBC connectors and APIs
- Must have worked with Kafka topics, Kafka brokers, ZooKeeper, and Confluent Control Center
- Must have worked with AvroConverter, JsonConverter, and StringConverter
- Must have worked on Debezium source/sink connectors with a CDC implementation (connector registration is sketched above)
- Must have worked on custom logic in the Producer API
- Must have worked on complex transformations in the Consumer API
- Must have worked on tuning configuration for brokers, topics, producers, consumers, Connect, Streams, and Admin clients (see the tuning sketch after this list)
- Must have deployed at least ten source/sink connectors in production
- Must have worked on distributed computing, parallel processing, and large-scale data management
- Must have integrated Kafka with RabbitMQ, Redis, or AWS SQS
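As an illustration of the Schema Registry and AvroConverter items above, here is a sketch of producing Avro records with confluent-kafka; the Schema Registry URL, topic name, and schema are all illustrative:

```python
# Produce an Avro-encoded record registered in Schema Registry (illustrative).
from confluent_kafka import Producer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer
from confluent_kafka.serialization import MessageField, SerializationContext

schema_str = """
{
  "type": "record",
  "name": "Order",
  "fields": [
    {"name": "id", "type": "long"},
    {"name": "amount", "type": "double"}
  ]
}
"""

sr_client = SchemaRegistryClient({"url": "http://localhost:8081"})
serializer = AvroSerializer(sr_client, schema_str)

producer = Producer({"bootstrap.servers": "localhost:9092"})
value = serializer(
    {"id": 1, "amount": 9.99},
    SerializationContext("orders", MessageField.VALUE),  # hypothetical topic
)
producer.produce("orders", value=value)
producer.flush()
```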
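And for the configuration-tuning item, these are the kinds of producer knobs in question; the values below are examples under assumed workloads, not recommendations:

```python
# Example producer tuning configuration (librdkafka property names).
producer_conf = {
    "bootstrap.servers": "localhost:9092",
    "acks": "all",               # wait for the full ISR to acknowledge
    "enable.idempotence": True,  # avoid duplicates on retry
    "linger.ms": 20,             # batch small records before sending
    "compression.type": "lz4",   # trade CPU for network/disk savings
}
```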
Preferred
- Good to have worked with the Admin API, Connect API, and Streams API (see the Admin API sketch after this list)
- Good to have worked on development of a data ingestion platform
- Good to have worked with KSQL and Kafka Streams (KStream)
- Good to have worked with Confluent connectors
- Good to have built connectors from scratch using Java
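For the Admin API item, here is a short sketch of creating a topic programmatically with confluent-kafka's AdminClient; the topic name, partition count, and replication factor are placeholders:

```python
# Create a topic via the Admin API (illustrative names and sizes).
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "localhost:9092"})
futures = admin.create_topics(
    [NewTopic("clean-events", num_partitions=6, replication_factor=3)]
)
for topic, future in futures.items():
    future.result()  # raises if creation failed
    print(f"created {topic}")
```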
Benefits & conditions
- Flexible schedule and competitive compensation. You will be working on our revolutionary products, which are pioneers in their respective categories.
We try hard to hire fun-loving folks who are driven by more than a paycheque. You will be working with the cream of the talent pool on extremely challenging problems at a most happening workplace.