Data Scientist - (Research, Development & Engineering)
Job description
Kimberly-Clark is seeking a motivated and skilled Data Scientist to join our dynamic team. The ideal candidate will play a pivotal role in designing and building analytics solutions that support informed decision-making. You will work closely with R&D and DTS technology teams to design and implement scalable data pipelines, build analytics within R&D solutions, and ensure the accuracy and availability of data for analytics and reporting. The primary focus of the position is to design, develop, and maintain analytics solutions.

Data Collection and Integration:
* Collaborate with engineering and architecture teams to identify, collect, and harmonize data from various sources.
* Design and develop ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) pipelines to process and curate data sets using technologies such as SQL Server, Azure Data Factory, and Databricks.
Data Modeling and Warehousing:
* Develop and maintain data models and data warehouses using platforms such as SQL Server, Azure Data Factory, Snowflake, and Databricks.
* Apply metadata-driven frameworks to ensure scalable data ingestion and processing.
Data Quality and Standards:
* Implement data quality checks and validation frameworks to maintain high data standards.
* Establish and maintain data development standards and principles, providing guidance and project-specific recommendations.
Analytics and Reporting:
* Build models that are interpretable, scalable, and meet business needs.
* Develop visualizations that communicate model results to stakeholders and leadership, leveraging Microsoft Azure technologies.
* Test and validate analytics solutions to ensure data integrity and that actual results match expected results.
Collaboration and Mentorship:
* Work with the principal architect, product owners, solution engineers, business customers, and other key stakeholders to translate requirements into technical designs.
* Mentor junior engineers and team members on data engineering techniques and best practices.
* Train business users and build their skills to maximize the return on investment of the analytics solutions.
Agile and DevOps Practices:
* Use Agile methodologies and tools to deliver products in a fast-paced environment.
* Collaborate with platform teams to design and build automated processes for pipeline construction, testing, and code migration.
Requirements
You perform at the highest level possible, and you appreciate a performance culture fueled by authentic caring. You want to be part of a company actively dedicated to sustainability, inclusion, wellbeing, and career development.

* Bachelor's degree required.
* Fluency in English.
* 5+ years of experience in data engineering, including designing, developing, and building solutions on data platforms.
* Strong proficiency with SQL, APIs, Power Query, and Microsoft Power BI.
* Experience with Azure Data Factory and Databricks.
* Strong knowledge of data collection, analysis, and cleaning to build reports for stakeholders.
Nice to have:
* Experience with Python.
* Exposure to SQL Server, Databricks, HANA, Snowflake, and Teradata.
* Proven track record in building and maintaining data pipelines, data warehousing, and data modeling.
* Strong communication skills, both oral and written, with the ability to convey complex technical concepts to non-technical stakeholders.
* Ability to work independently and collaboratively in a team environment.
* Familiarity with DevOps tools and practices for continuous integration and deployment.