
Senior Full Stack FinOps Engineer
Job Description
We engineer the core services and experiences of a next-generation multi-cloud FinOps platform—turning TB-scale billing/usage data into trusted, fast, explainable cost intelligence that enables allocation, optimization, and operational accountability.
You’ll own architecture and critical components end-to-end (ingestion → normalization → analytics → APIs → UI), raise engineering standards, and partner closely with FinOps practitioners and product leadership to ship measurable impact.
What Success Looks Like (Performance Objectives)
1) Multi-Cloud Ingestion (e.g., AWS / Azure / GCP)
Within 180 days, you'll deliver ingestion services that process high-volume billing data against clear SLAs.
Key outcomes
AWS ingestion supports Cost and Usage Report (CUR) files in S3, Cost Explorer APIs, Cost Categories, and RI/Savings Plans coverage.
Azure ingestion supports Cost Management exports/APIs and EA/billing constructs where applicable.
GCP ingestion supports BigQuery billing export tables and relevant cost APIs.
Ingestion can handle TB-scale datasets with partitioning/compaction strategies and repeatable backfills.
Data quality checks and lineage are built-in (schema drift detection, late-arriving data handling, idempotency).
Success measures
≥ 99% ingestion job success rate (or agreed SLA)
Known error budget and automated retries/alerts
Backfills complete within agreed time windows
Cost data freshness targets are met (e.g., daily/hourly depending on provider constraints)
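To give a flavor of the idempotent, backfill-safe ingestion described above, here is a minimal sketch; the provider names, partition keys, and content-hash dedup logic are illustrative assumptions, not this platform's actual design:

```python
import hashlib
from dataclasses import dataclass, field


@dataclass
class IngestionStore:
    """Toy store holding billing rows in (provider, billing_day) partitions."""
    partitions: dict = field(default_factory=dict)
    seen: set = field(default_factory=set)

    def ingest(self, provider: str, billing_day: str, records: list) -> int:
        """Idempotently load records into one partition.

        Re-running the same batch (a retry or a backfill replay) writes
        nothing new, so automated retries are safe.
        """
        written = 0
        partition = self.partitions.setdefault((provider, billing_day), [])
        for rec in records:
            # A content hash acts as the idempotency key for duplicate
            # or late-arriving copies of the same row (assumed scheme).
            key = hashlib.sha256(repr(sorted(rec.items())).encode()).hexdigest()
            if key in self.seen:
                continue
            self.seen.add(key)
            partition.append(rec)
            written += 1
        return written


store = IngestionStore()
batch = [{"resource": "i-123", "cost": 1.5}, {"resource": "i-456", "cost": 2.0}]
first = store.ingest("aws", "2024-06-01", batch)   # initial load: 2 new rows
second = store.ingest("aws", "2024-06-01", batch)  # replay: 0 new rows
```

Partition-level idempotency like this is what makes "repeatable backfills" and automated retries compatible with a hard job-success SLA.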
2) Unified Cost Data Model & Normalization
Within 6–9 months, you'll deliver a normalized, cross-cloud cost model that enables consistent allocation and analytics.
Key outcomes
A unified schema that resolves cross-cloud differences (accounts/subscriptions/projects, resource identity, usage types).
Support for allocation via tags/labels, custom dimensions, shared cost modeling, and business mappings.
A detailed semantic layer (definitions for amortized vs. blended, commitment allocation, etc.).
A versioned approach for schema evolution and backward compatibility.
Success measures
Allocation accuracy validated with FinOps customers
Clear “source of truth” definitions and reconciliation process
Schema changes do not break downstream dashboards/queries
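One way to picture the cross-cloud normalization above is a unified record type fed by per-provider field mappings. This is a hypothetical sketch: real billing exports carry far more fields, and the field names in `FIELD_MAP` are illustrative, not actual export schemas:

```python
from dataclasses import dataclass


@dataclass
class UnifiedCostRecord:
    provider: str         # "aws" | "azure" | "gcp"
    billing_account: str  # account / subscription / project, unified
    resource_id: str
    usage_type: str
    amortized_cost: float
    tags: dict

# Each provider names the same concepts differently (assumed field names).
FIELD_MAP = {
    "aws":   {"billing_account": "usage_account_id", "resource_id": "resource_id"},
    "azure": {"billing_account": "subscription_id",  "resource_id": "resource_uri"},
    "gcp":   {"billing_account": "project_id",       "resource_id": "resource_name"},
}


def normalize(provider: str, row: dict) -> UnifiedCostRecord:
    """Map a provider-specific billing row onto the unified schema."""
    m = FIELD_MAP[provider]
    return UnifiedCostRecord(
        provider=provider,
        billing_account=row[m["billing_account"]],
        resource_id=row[m["resource_id"]],
        usage_type=row.get("usage_type", "unknown"),
        amortized_cost=float(row.get("amortized_cost", 0.0)),
        tags=row.get("tags", {}),
    )


rec = normalize("gcp", {"project_id": "proj-1", "resource_name": "vm-1",
                        "usage_type": "compute", "amortized_cost": 3.25})
```

Keeping the mapping table explicit and versioned is one way to let the schema evolve without breaking downstream dashboards and queries.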
3) High-Performance Analytics Engine (Interactive + Explainable)
Within 9–12 months, you'll enable fast, explainable answers to "where did the spend go?"
Key outcomes
Analytics pipelines optimized for interactive exploration (e.g., drill-downs, group-by dimensions, cost drivers).
Support for anomaly detection workflows (rules-based and/or ML-assisted is a plus).
Query patterns built for performance at scale (partitioning, clustering, materialization strategy).
Integration with the warehouse/lakehouse layer, depending on the stack.
Success measures
Key dashboards load within target latency (e.g., <2–5 seconds for common views)
Defined SLAs for compute cost, query performance, and data freshness
Reduction in critical support issues tied to data discrepancies
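At its core, the interactive "where did the spend go?" pattern above is grouping pre-aggregated cost rows by arbitrary dimensions (drill-down). A minimal stdlib sketch, with illustrative column names:

```python
from collections import defaultdict

# Toy pre-aggregated cost rows; real dimensions would include account,
# region, usage type, tags, etc. (assumed sample data).
ROWS = [
    {"team": "data", "service": "compute", "cost": 120.0},
    {"team": "data", "service": "storage", "cost": 30.0},
    {"team": "web",  "service": "compute", "cost": 75.0},
]


def group_cost(rows, *dims):
    """Aggregate cost by any combination of dimensions (a drill-down)."""
    totals = defaultdict(float)
    for r in rows:
        totals[tuple(r[d] for d in dims)] += r["cost"]
    return dict(totals)


by_team = group_cost(ROWS, "team")            # top-level view
drill = group_cost(ROWS, "team", "service")   # one level deeper
```

At TB scale, the same group-by shape is what partitioning, clustering, and materialization strategies exist to serve within interactive latency targets.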
4) Product-Grade Full Stack Delivery (Backend + UI)
Within 90–180 days, you'll ship end-user capabilities that accelerate FinOps workflows.
Key outcomes
Backend services in Python expose stable APIs for analytics and allocations.
TypeScript UI (React/Next.js) supports high-performance tables, explorers, and filters.
Reusable API patterns with consistent contracts and versioning.
Thoughtful UX for FinOps use cases: allocation review, cost driver exploration, commitment coverage insights.
Success measures
Feature adoption and repeat usage in pilot teams
Reduction in manual reporting effort for FinOps teams
Performance benchmarks meet agreed targets
5) Engineering Standards, Reliability & Cost Efficiency
Ongoing, you'll raise the platform bar across security, observability, and operational excellence.
Key outcomes
Standards for microservices, API architecture, data modeling, distributed system patterns.
Observability: tracing/logging/metrics, SLOs, alerting, on-call readiness where applicable.
Security and governance appropriate for billing and usage data (RBAC, audit, encryption, least privilege access).
Cost-aware engineering: FinOps principles applied to the platform itself.
Success measures
Production incidents reduced over time, with blameless postmortems and preventive action
Infrastructure costs monitored and optimized with clear ownership
Engineering guidelines adopted by the team and reflected in PR quality
Your Core Responsibilities (What You’ll Do)
a) Architect and build ingestion, normalization, and analytics components for multi-cloud billing data.
b) Lead reviews and establish engineering practices for reliability, scalability, and cost efficiency.
c) Deliver high-quality backend services and modern UI experiences for FinOps users.
d) Partner with FinOps Reporting and Analytics, Product, and Cloud Engineering to shape the roadmap and ensure data correctness.
e) Mentor engineers through code reviews, pairing, design guidance, and technical standards.
The Environment (Representative Tech Stack)
(We care more about your ability to engineer outcomes than exact tool matches.)
- Languages: Python, TypeScript
- Backend: FastAPI / Flask / Django; microservices; event-driven patterns.
- Data/Compute: Spark/PySpark/Dask; serverless ETL (Glue/Synapse/Dataflow)
- Storage/Warehouse: Snowflake, BigQuery, Redshift, Synapse; Lakehouse patterns
- Infra: Docker/Kubernetes; IaC (Terraform/CDK/Pulumi)
- Observability: OpenTelemetry-style tracing, metrics/logs, SLOs
- Clouds: Cloud billing constructs and APIs
Candidate Profile (Evidence of Comparable Accomplishments)
You’re likely a strong fit if you’ve done several of these:
a) Developed production ingestion pipelines for large datasets (TB-scale or high-frequency) with quality controls and SLAs.
b) Crafted multi-tenant or multi-account analytics platforms with consistent schemas and performance constraints.
c) Implemented cost/usage analytics or similar domains (telemetry, observability, billing, metering, FinOps, data platforms).
d) Shipped full stack features: APIs + UI that users rely on daily (not just prototypes).
e) Led architecture decisions and influenced standards across a team (architecture design documents, reviews, migration strategy).
f) Worked with CUR/exports, commitment models, allocation, tagging/labels, reconciliation.
Critical Proficiencies (What “Extraordinary” Looks Like in the Role)
1. Systems thinking: sees end-to-end flow and optimizes for reliability + cost + usability.
2. Data correctness perspective: makes data credible, explainable, and defensible.
3. Product engineering: builds for users, not just infrastructure.
4. Pragmatic architecture: chooses simple, scalable approaches; avoids over-engineering.
5. Influence & mentorship: improves the team via coaching, standards, and clear communication.
Career Stage: Manager
London Stock Exchange Group (LSEG) Information:
Join us and be part of a team that values innovation, quality, and continuous improvement. If you're ready to take your career to the next level and make a significant impact, we'd love to hear from you.
LSEG is a leading global financial markets infrastructure and data provider. Our purpose is driving financial stability, empowering economies and enabling customers to create sustainable growth.
Our purpose is the foundation on which our culture is built. Our values of Integrity, Partnership, Excellence and Change underpin our purpose and set the standard for everything we do, every day. They go to the heart of who we are and guide our decision making and everyday actions.
Working with us means that you will be part of a dynamic organisation of 25,000 people across 65 countries. However, we will value your individuality and enable you to bring your true self to work so you can help enrich our diverse workforce.
We are proud to be an equal opportunities employer. This means that we do not discriminate on the basis of anyone’s race, religion, colour, national origin, gender, sexual orientation, gender identity, gender expression, age, marital status, veteran status, pregnancy or disability, or any other basis protected under applicable law. In accordance with applicable law, we can reasonably accommodate applicants' and employees' religious practices and beliefs, as well as mental health or physical disability needs.
You will be part of a collaborative and creative culture where we encourage new ideas. We are committed to sustainability across our global business and we are proud to partner with our customers to help them meet their sustainability objectives. Our charity, the LSEG Foundation provides charitable grants to community groups that help people access economic opportunities and build a secure future with financial independence. Colleagues can get involved through fundraising and volunteering.
LSEG offers a range of tailored benefits and support, including healthcare, retirement planning, paid volunteering days and wellbeing initiatives.
Please take a moment to read this privacy notice carefully, as it describes what personal information London Stock Exchange Group (LSEG) (we) may hold about you, what it’s used for, how it’s obtained, your rights, and how to contact us as a data subject.
If you are submitting as a Recruitment Agency Partner, it is essential and your responsibility to ensure that candidates applying to LSEG are aware of this privacy notice.