Senior Software Engineer - Entity Metadata Ingestion and Distribution (EMID - Knowledge Graph)

Bloomberg
Charing Cross, United Kingdom

Role details

Contract type: Permanent contract
Employment type: Full-time (> 32 hours)
Working hours: Regular working hours
Languages: English
Experience level: Senior

Job location

Charing Cross, United Kingdom

Tech stack

Java
Artificial Intelligence
Apache Jena
Data Governance
Data Systems
Query Languages
Web Development
Distributed Systems
Drools
Python
Linked Data
Metadata
Raw Data
Semantic Web
Software Engineering
SPARQL
Spark
Knowledge Representation
Kafka
Data Management
Front End Software Development
Data Pipelines

Job description

The Entity Metadata Ingestion and Distribution (EMID) team leads a company-wide effort to build scalable, interoperable linked data systems. Our mission is to aggregate and distribute metadata to support entity disambiguation across Bloomberg. Our data pipelines process over 10 million daily updates from streaming endpoints and cloud-hosted files, with enrichment and delivery latencies averaging just 700 ms per record. The resulting data underpins billions of data points utilised by applications throughout Bloomberg, including but not limited to trading platforms and AI.

Having made excellent progress on our initial milestones, we're now expanding into the next phase: transforming raw data into interconnected knowledge. We are building an inference platform for the scalable management and execution of data inferencing, driven by semantic models and user-defined rules, to enrich raw datasets. Our ontology-based inferencing will also enable context-aware query and discovery, allowing users to explore implicit relationships and linked data patterns within Bloomberg's enterprise knowledge graph.

We're seeking a Senior Full-Stack Software Engineer with strong expertise in scalable, distributed system design to help build this new inference platform from the ground up. In this high-impact role, you'll have the opportunity to influence key technical decisions and build a foundational system that will power products and workflows across the company.
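For candidates less familiar with the semantic web stack, here is a toy sketch of the kind of ontology-based inference and context-aware discovery described above. It uses the open-source rdflib library rather than our internal platform, and every URI, class, and entity in it is invented for illustration: a small ontology plus a SPARQL property path surfaces a relationship that is never stated explicitly in the raw data.

    from rdflib import Graph

    TTL = """
    @prefix ex:   <http://example.org/> .
    @prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

    # Tiny ontology: every Bank is a FinancialEntity, every FinancialEntity is an Entity.
    ex:Bank            rdfs:subClassOf ex:FinancialEntity .
    ex:FinancialEntity rdfs:subClassOf ex:Entity .

    # The raw data only says that AcmeBank is a Bank.
    ex:AcmeBank rdf:type ex:Bank .
    """

    g = Graph()
    g.parse(data=TTL, format="turtle")

    # Query-time inference: the rdfs:subClassOf* property path finds AcmeBank as an
    # ex:Entity even though that triple was never asserted in the data.
    QUERY = """
    PREFIX ex:   <http://example.org/>
    PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?entity WHERE { ?entity rdf:type/rdfs:subClassOf* ex:Entity . }
    """
    for row in g.query(QUERY):
        print(row.entity)  # -> http://example.org/AcmeBank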

We'll trust you to:

  • Design, build, and scale core components of our semantic reasoning platform, including enrichment rule engines and inference capabilities
  • Integrate reasoning capabilities with the enterprise knowledge graph to enable advanced querying and discovery
  • Collaborate with a broad set of stakeholders (domain experts, content providers, and product teams) to support diverse inference needs
  • Ensure reliability, scalability, and performance of inference infrastructure in high-throughput production environments
  • Evaluate and adopt the right technologies to deliver powerful, scalable inference over enterprise knowledge graphs

Requirements

  • Hands-on experience in software engineering, with a strong background in designing and building distributed systems or data platforms.
  • Proficiency in Python, Java, and micro-frontend web development, with a demonstrated ability to write robust, production-quality code.
  • Hands-on experience with knowledge graph and semantic web technologies (e.g., RDF, OWL, SHACL, SPARQL).
  • Knowledge of one or more rule-based and semantic reasoning tools and frameworks (e.g., Apache Jena, Drools, OWL reasoners such as Pellet or HermiT); see the illustrative sketch after this list.
  • Experience working with large-scale data systems such as Spark, Kafka, or similar.
  • Strong understanding of graph data models and query languages (e.g., SPARQL, Cypher).
  • Excellent communication skills and ability to collaborate across interdisciplinary teams.
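As a flavour of the rule-driven enrichment mentioned above, the sketch below expresses a user-defined rule as a SPARQL CONSTRUCT query and applies it with rdflib until no new triples appear. It is an illustrative toy rather than our production engine, and the ownership vocabulary and entities are invented for the example.

    from rdflib import Graph

    RAW = """
    @prefix ex: <http://example.org/> .
    ex:FundA  ex:ownedBy ex:HoldCo .
    ex:HoldCo ex:ownedBy ex:ParentCorp .
    """

    # User-defined rule: ownership is transitive, so derive the indirect links.
    RULE = """
    PREFIX ex: <http://example.org/>
    CONSTRUCT { ?a ex:ownedBy ?c }
    WHERE     { ?a ex:ownedBy ?b . ?b ex:ownedBy ?c . }
    """

    g = Graph()
    g.parse(data=RAW, format="turtle")

    # Forward-chain the rule until a fixpoint: stop once a pass adds nothing new.
    while True:
        before = len(g)
        for triple in g.query(RULE):  # CONSTRUCT results iterate as triples
            g.add(triple)
        if len(g) == before:
            break

    # The enriched graph now also contains: ex:FundA ex:ownedBy ex:ParentCorp
    for s, p, o in g:
        print(s, p, o)

Production reasoners and rule engines such as Apache Jena's rule engine or Drools offer richer rule languages and far more efficient evaluation, but the enrich-until-fixpoint loop above captures the core idea of materialising new knowledge from raw data.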

We'd love to see:

  • Familiarity with knowledge representation and linked data best practices.
  • Understanding of data governance and model change management.

About the company

Discover what makes Bloomberg unique: watch our video for an inside look at our culture, values, and the people behind our success.

Apply for this position