Data AI Engineering Specialist
Job description
As a Data AI Engineering Specialist within the Architecture & Modernization team, you will be instrumental in building and maintaining the data infrastructure for our Data AI platforms. This role will involve hands-on development, data pipeline creation, and close collaboration with stakeholders across the organization. This role requires a self-starter with strong execution skills and the ability to work independently. You will be expected to not only execute on the current strategy but also contribute to its evolution. We value diversity of thought and are committed to building a team that reflects the diversity of our global community.
This is a hybrid position requiring a minimum of three days per week in the office. The role is based in NYC and may also be based in Montreal for qualified candidates in Canada.
What you'll do in the role
- Develop and maintain data pipelines and ETL (Extract, Transform, Load) processes.
- Work with structured and unstructured data to ensure it is accessible and usable.
- Optimize data systems for performance and scalability.
- Implement data quality and data governance standards.
- Collaborate with stakeholders across technology and business units to understand their data needs, translate them into technical solutions, and provide data-driven insights.
- Contribute to documentation and knowledge sharing within the team, creating and maintaining technical documentation and training materials.
- Participate in code reviews and contribute to the improvement of development processes.
- Contribute to the broader data architecture community through knowledge sharing and presentations.
Requirements
- 8+ years of experience as a practitioner in data engineering or a related field.
- Proficiency in Python programming.
- Experience with data processing frameworks like Apache Spark or Hadoop.
- Knowledge of database systems (SQL and NoSQL).
- Experience working with Snowflake and Databricks.
- Experience with Snowflake Cortex is a plus.
- Familiarity with cloud platforms (AWS, Azure) and their data services.
- Understanding of data modeling and data architecture principles.
- Experience with data warehousing concepts and technologies.
- Experience with message queues and streaming platforms (e.g., Kafka).
- Experience with version control systems (e.g., Git).
- Experience using Jupyter notebooks for data exploration, analysis, and visualization.
- Excellent communication and collaboration skills.
- Ability to work independently and as part of a geographically distributed team.
Nice to have
- Familiarity with data visualization tools (e.g., Tableau, Power BI).
- Familiarity with data governance and security best practices (e.g., data access control, data masking).
- Experience with Agile methodologies.
- Familiarity with data catalog and metadata management tools (e.g., Collibra).
- Familiarity with CI/CD pipelines and DevOps practices.
Benefits & conditions
Expected base pay rates for the role will be between $150,000 to $190,000 per year at the commencement of employment. However, base pay if hired will be determined on an individualized basis and is only part of the total compensation package, which, depending on the position, may also include commission earnings, incentive compensation, discretionary bonuses, other short and long-term incentive packages, and other Morgan Stanley sponsored benefit programs.
Morgan Stanley's goal is to build and maintain a workforce that is diverse in experience and background but uniform in reflecting our standards of integrity and excellence. Consequently, our recruiting efforts reflect our desire to attract and retain the best and brightest from all talent pools. We want to be the first choice for prospective employees.