Next time you open the Uber app to order some late night grub, rent a bicycle to get across town, or tip your driver, you can thank Maxim Fateev and Samar Abbas because, odds are, the workflows you kicked off are being executed by the software they wrote. Maxim and Samar are the co-creators of Cadence, an open source microservices orchestration engine built inside of Uber that powers some of the company’s most business-critical applications.
We were first introduced to Maxim and Samar in the spring of 2019 by a mutual acquaintance at Uber, although we had begun hearing about Cadence many months earlier at conferences where Maxim presented, like Facebook @Scale and Uber’s own Open Source Summit. By the time we finally met Maxim and Samar, Cadence had permeated Uber, growing to enable dozens of diverse use cases, from microservice and data pipeline orchestration to CI/CD to IT support ticketing. In those early conversations, it was clear that we were speaking to two of the world’s leading experts on distributed, fault-tolerant computing. Maxim and Samar had over three decades of combined experience architecting systems like Amazon Simple Workflow Service and Microsoft’s Durable Task Framework, the technology that inspired and now powers Azure Durable Functions. Cadence was their fifth or sixth incarnation of this type of system.
Over the course of 2019, we spent months with Maxim and Samar, exploring the surface area of a potential commercial opportunity around Cadence. As the adoption of the open source project spread beyond Uber, out to some of the most sophisticated engineering organizations in Silicon Valley, it was a conversation with Mitchell Hashimoto, co-founder and co-CTO of HashiCorp, that helped confirm what we had suspected: the potential of this technology was massive. Mitchell recounted how HashiCorp’s engineers had discovered Cadence organically and had come to rely on it as a foundational building block of their cloud services. Mitchell finished the conversation by saying, “any long-running cloud service should be built on something like this.”
That fall, Maxim and Samar officially left Uber to found Temporal, raising a $6.75M seed financing led by Amplify. Earlier this year, we were fortunate to be joined by our friends at Sequoia Capital, who led Temporal’s Series A, bringing the company’s total raised to over $25M.
The core tenet behind Temporal is simple but has extremely powerful implications: what if you could abstract away all distributed state management from developers, allowing them to express their applications as pure business logic? Transforming this idea into reality, however, is inordinately complex and requires solving many of the thorniest problems across the domains of programming languages, distributed systems, and developer experience.
Today, the reality facing every developer is that even simple applications are, in fact, complex distributed systems. Each service in this distributed architecture communicates across unstable network boundaries, constantly parsing JSON or reading from a Protobuf and persisting data so it can be read by another part of the system. To make their applications work, let alone work reliably, developers are forced to orchestrate stateless services, third-party APIs, databases, queuing systems, and cron jobs. The end result is overwhelming complexity, with developers dedicating a majority of their code to working around consistency issues and infrastructure failures. Concepts like retries, timeouts, consensus, and queue routing – previously the domain of distributed systems PhDs – have become commonplace. At best, these Frankenstein architectures kill developer productivity; at worst, they lead to critical reliability challenges that can threaten a business.
This is where Maxim and Samar stepped in and asked, “what if it didn’t have to be this way?” What if we could abstract away all this distributed systems cruft from application developers and let them focus on their code, not on wrangling their cloud? And so they did.
Temporal is an open source microservices orchestration engine with a flexible programming model, enabling developers to express global, fault-tolerant applications as composable workflows written in their programming language of choice (today, Java and Go are supported, with Python, TypeScript, and many others planned). The core innovation underlying Temporal is a virtual, durable memory that is not linked to any specific process and that preserves the application’s complete state across all types of hardware and software failures. Developers are empowered to write invincible apps harnessing the full power of their favorite programming language, while Temporal takes care of the durability, availability, and scalability of the application.
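As a rough sketch of what this looks like in practice with the Temporal Go SDK (`go.temporal.io/sdk/workflow`): the workflow below is plain Go expressing business logic, while retries, timeouts, and state durability are handled by the engine. `ChargeCard` and `SendReceipt` are hypothetical activity functions of our own invention, and actually running this would require a Temporal server and a registered worker.

```go
// A hypothetical payment workflow: ordinary Go code, but every step is
// durable. If the process crashes between ChargeCard and SendReceipt,
// Temporal resumes the workflow on another worker with its state intact.
func PaymentWorkflow(ctx workflow.Context, orderID string) error {
	// Timeouts and retry policy are declared once,
	// not hand-rolled around every call.
	ctx = workflow.WithActivityOptions(ctx, workflow.ActivityOptions{
		StartToCloseTimeout: time.Minute,
	})

	var receipt string
	if err := workflow.ExecuteActivity(ctx, ChargeCard, orderID).Get(ctx, &receipt); err != nil {
		return err
	}
	return workflow.ExecuteActivity(ctx, SendReceipt, receipt).Get(ctx, nil)
}
```

The notable design choice is what is absent: no persistence code, no queue wiring, no retry loops — the “virtual durable memory” described above makes the function’s local variables survive failures.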
Having seen and worked with dozens of commercial open source companies in Amplify’s history, we’ve never encountered a project that has won the trust of respected engineering organizations as quickly as Temporal. Though the project is still relatively nascent – Temporal v1.0.0 was announced just a few weeks ago – the response from developers has been overwhelming. Companies like HashiCorp, Box, DoorDash, Checkr, and dozens more have voted with their feet, entrusting Temporal with some of their most mission-critical workloads, ranging from payment processing to business process orchestration to infrastructure provisioning.
With that, today we are thrilled to announce Amplify’s investment in and support of Temporal as part of the company’s official launch.
If you’re a VP Eng or systems architect figuring out your microservices architecture, or if you’re an engineer, product designer, developer advocate or just someone who wants to shape the future of application development, please get in touch with Maxim (firstname.lastname@example.org) and Samar (email@example.com).