Software Engineer, Field Applications and Data Platform Integration
Job description
The Senior Software Engineer will play a critical role in the design, development, and evolution of a field and project management application used to support load estimation and power planning activities. This application enables field teams to collect, manage, and analyze power usage data for businesses, buildings, and other facilities requiring electrical capacity planning.
The organization is currently developing an on-premises, low-code application using OutSystems to replace manual processes and legacy systems, including Excel-based tracking. While this solution addresses immediate operational needs, the long-term vision is a scalable, modern web application built on Databricks with a JavaScript or TypeScript-based front end.
This role will bridge the current state and future state by continuing development of the existing OutSystems application while simultaneously enabling the next-generation web application through data ingestion, API development, and front-end engineering. The position is approximately 75 percent software engineering and 25 percent data engineering.
Responsibilities
Current-State OutSystems Application
- Take ownership of ongoing development and enhancement of the existing OutSystems low-code application used by field and project management teams.
- Design and implement user-friendly, accessible workflows and interfaces that support efficient field data collection and review.
- Collaborate with stakeholders to refine requirements and ensure the application meets operational needs.
Future-State Web Application
- Design and build a modern, scalable web application using JavaScript or TypeScript frameworks such as React, Angular, or Vue.
- Develop and integrate APIs to enable secure and efficient interaction between the web application and Databricks.
- Apply software engineering best practices, including modular design, version control, and automated testing.
Data Engineering and Integration
- Build and maintain Spark and PySpark pipelines to ingest data from field tracking systems, Excel files, and legacy sources into Databricks.
- Work within established Databricks schemas provided by the data architecture team.
- Ensure data quality, reliability, and performance of ingestion pipelines.
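As an illustration of the data-quality work these bullets describe, the sketch below shows a basic validation gate for rows exported from a legacy Excel/CSV tracker before they are loaded into Databricks. It is a minimal, hypothetical example: the column names and rules are assumptions (real schemas come from the data architecture team), and it uses the stdlib `csv` module for portability where a production pipeline would use PySpark.

```python
import csv
import io

# Hypothetical target columns for a load-estimation record; the actual
# schema is provided by the data architecture team.
EXPECTED_COLUMNS = ["site_id", "facility_type", "estimated_load_kw"]

def validate_rows(csv_text):
    """Split CSV rows (e.g. a legacy Excel export) into valid records
    and rejects -- a simple data-quality gate before ingestion."""
    reader = csv.DictReader(io.StringIO(csv_text))
    valid, rejects = [], []
    for row in reader:
        try:
            # Numeric field must parse; site_id must be non-empty.
            row["estimated_load_kw"] = float(row["estimated_load_kw"])
            if row.get("site_id"):
                valid.append(row)
            else:
                rejects.append(row)
        except (KeyError, TypeError, ValueError):
            rejects.append(row)
    return valid, rejects

sample = (
    "site_id,facility_type,estimated_load_kw\n"
    "S-001,warehouse,120.5\n"
    ",office,40\n"
    "S-002,retail,not_a_number\n"
)
valid, rejects = validate_rows(sample)
# One clean record (S-001); the blank site_id and unparseable load are rejected.
```

In a PySpark pipeline the same rules would typically be expressed as DataFrame filters, with rejects routed to a quarantine table for review.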
DevOps and Delivery
- Implement and adhere to CI/CD and versioning best practices.
- Support deployment, maintenance, and incremental improvement cycles for both low-code and pro-code solutions.
- Partner with cross-functional teams to align on technical standards and delivery timelines.
Requirements
You must be authorized to work on Apex's W2. We cannot provide sponsorship for this role now or in the future.
- Strong experience with Spark and PySpark for data pipeline development.
- Hands-on experience with Databricks.
- Proficiency in Python for data and application development.
- Experience with low-code platforms, preferably OutSystems. Experience with PowerApps, SharePoint, or similar low-code tools is acceptable.
- Strong JavaScript and TypeScript skills and experience with modern front-end frameworks such as React, Angular, or Vue.
- Solid understanding of API design and integration.
- Working knowledge of CI/CD pipelines and source control best practices.
- UX design awareness with the ability to build intuitive, accessible user experiences.
Preferred Qualifications
- Prior experience transitioning applications from low-code platforms to custom, scalable web architectures.
- Familiarity with on-prem and hybrid application environments.
- Experience supporting field-based or operational data collection systems.