At National Grid Partners, we invest in Decarbonization, Decentralization, and Digitization of the broader energy infrastructure. We back founder-centric, tech-differentiated startups that have the potential to be category creators.

Enterprise cloud computing is revolutionizing how businesses manage costs and improve efficiency, helping them stay competitive by leveraging scalable software solutions. The statistics are massive (if not startling):

  • 67 percent of enterprise infrastructure is cloud-based.
  • 82 percent of enterprise workloads will reside on the cloud.
  • More than 40 zettabytes of data will be flowing through cloud servers and networks. A zettabyte is one trillion gigabytes!
  • Usage is not slowing down, and development keeps getting more sophisticated: the average person uses 36 cloud-based services every single day, ranging from Facebook to Dropbox to online banking.

Given these numbers, I was surprised when Reed Sturtevant (The Engine) and Todd Graham introduced me to Sync Computing, a cloud computing optimization startup spun out of MIT Lincoln Laboratory. Truth be told, my initial reaction was: “So, what’s the problem? Cloud workloads need to be optimized?”

Cloud Workloads: The Good, the Bad and the Ugly

Since at least 2016, investors have had a ringside view of the growing cloud market. Global spend on public cloud services now exceeds $480 billion a year. But there’s more to the story. Running data analytics and machine learning on the cloud is expensive, complex, and time-consuming. Ever-changing code, data, and infrastructure can cause significant delays, exploding costs, and crashed jobs.

Optimizing existing cloud use continues to be the industry’s top initiative. Organizations are struggling to rein in skyrocketing cloud costs, with some estimates indicating that organizations waste as much as 70 percent of their cloud spend.

Artificial intelligence/machine learning (AI/ML) workloads have hyperscaled to the point where enterprises need hundreds of terabytes per second (TB/s) of bandwidth and compute throughput in the peta-operations per second (petaops) range – one quadrillion, or a million billion, operations per second. (For scale, a terabyte is 1,000 gigabytes.) Distributing and parallelizing these workloads, on both the training and the inference side, is intensely complex.

Distributed workloads are the way to go. Modern workloads consist of thousands of tasks that must be optimally assigned across hundreds of geographically distributed resources – compute, storage, and network – while meeting strict cost and runtime constraints. But scheduling and optimizing workloads over these highly distributed resources is ‘NP-hard’: the number of possible assignments explodes so quickly that no known algorithm can find the truly optimal schedule in a practical amount of time, even for workloads of modest size.
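To make that combinatorial explosion concrete, here is a minimal sketch of the underlying scheduling problem – my own toy illustration, not Sync’s algorithm. Assigning T tasks to R machines allows R^T possible schedules, so exact search becomes hopeless almost immediately, while a simple greedy heuristic returns a feasible (but not necessarily optimal) answer instantly. All task costs and machine counts below are made up.

```python
import itertools

# Toy model: each task has a fixed cost on any machine; we want to
# minimize the makespan (the busiest machine's total load).
task_costs = [4, 7, 2, 9, 3, 6, 5, 8]   # illustrative runtimes
num_machines = 3

def makespan(assignment):
    loads = [0] * num_machines
    for cost, machine in zip(task_costs, assignment):
        loads[machine] += cost
    return max(loads)

# Exact search: num_machines ** len(task_costs) schedules -- 3^8 = 6,561
# here, but 3^1000 for a workload with a thousand tasks, which is why
# finding the true optimum is intractable.
best = min(itertools.product(range(num_machines), repeat=len(task_costs)),
           key=makespan)
print("optimal makespan:", makespan(best))

# Greedy heuristic (longest task first onto the least-loaded machine):
# near-optimal in practice, and it runs in O(T log T) instead of exponential time.
loads = [0] * num_machines
for cost in sorted(task_costs, reverse=True):
    loads[loads.index(min(loads))] += cost
print("greedy makespan:", max(loads))
```

The gap between the exact and the greedy answer is small on this toy input; the gap in runtime, at realistic scale, is the whole problem.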

Further, the environmental benefits of optimization are often overlooked. Cloud computing is a major consumer of both materials and energy (for computation and for cooling). Optimization reduces the overall use of computing resources, which translates into savings on both energy and the materials cloud computing depends on (like chips and energy storage) – and directly supports carbon footprint reduction.

While cloud vendors offer tools to help a business measure, report, and reduce its cloud carbon footprint, the key is doing this easily and efficiently.
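As a back-of-the-envelope illustration of what such measurement involves – my own simplified model, not any vendor’s tool or Sync’s methodology – a cloud carbon estimate typically multiplies compute hours by server power draw, the data center’s power usage effectiveness (PUE), and the local grid’s carbon intensity. Cut the compute hours through optimization, and the footprint scales down proportionally. Every constant below is an assumption chosen for illustration.

```python
# Hypothetical back-of-the-envelope cloud carbon estimate; all constants
# are illustrative assumptions, not measured values.

def cloud_co2_kg(vcpu_hours: float,
                 watts_per_vcpu: float = 5.0,    # assumed average draw per vCPU
                 pue: float = 1.2,               # assumed data-center overhead
                 grid_kg_per_kwh: float = 0.4):  # assumed grid carbon intensity
    kwh = vcpu_hours * watts_per_vcpu / 1000 * pue
    return kwh * grid_kg_per_kwh

baseline = cloud_co2_kg(vcpu_hours=100_000)
optimized = cloud_co2_kg(vcpu_hours=60_000)  # e.g., a 40% cut in compute hours
print(f"baseline: {baseline:.0f} kg CO2, optimized: {optimized:.0f} kg CO2")
```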

Sync Computing: The Platform for Auto-optimizing Your Cloud Needs

Sync Computing has developed the world’s first distributed scheduling engine (DSE) designed specifically for this challenge. At its heart is a unique optimizer capable of finding the best way to distribute workloads in milliseconds rather than hours or days. The ability to make these calculations in real time and at relevant scale is the foundation for unlocking an untapped well of additional computing power. And as workloads continue to grow, we see this becoming a cornerstone of future computing environments.

The key to Sync Computing’s technology is its mathematical modeling of the computing environment, based on the log of a previous run. The simplicity and computational efficiency of this approach allow huge logs and communication patterns across many processor cores to be analyzed quickly, so optimization problems can be framed and solved in real time.
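To give a flavor of what log-driven modeling can look like – a simplified sketch of the general idea, not Sync’s actual model or any real log format – one can extract per-stage runtimes from a previous run, scale them with an Amdahl-style model, and pick the cheapest cluster size that still meets a runtime target. The field names, prices, and scaling model below are all assumptions.

```python
import json

# Hypothetical event log: one JSON record per stage from a previous run.
# Field names are illustrative, not the Apache Spark event-log schema.
log_lines = [
    '{"stage": 0, "runtime_s": 120, "parallel_fraction": 0.95}',
    '{"stage": 1, "runtime_s": 300, "parallel_fraction": 0.80}',
    '{"stage": 2, "runtime_s": 60,  "parallel_fraction": 0.50}',
]
stages = [json.loads(line) for line in log_lines]

def predicted_runtime_s(num_workers: int, baseline_workers: int = 8) -> float:
    """Amdahl-style scaling of each stage from the observed baseline run."""
    total = 0.0
    for s in stages:
        serial = s["runtime_s"] * (1 - s["parallel_fraction"])
        parallel = s["runtime_s"] * s["parallel_fraction"]
        total += serial + parallel * baseline_workers / num_workers
    return total

PRICE_PER_WORKER_HOUR = 0.50   # assumed on-demand price
RUNTIME_TARGET_S = 400

# Choose the cheapest cluster size that still meets the runtime target.
candidates = [(n, predicted_runtime_s(n)) for n in (4, 8, 16, 32, 64)]
feasible = [(n, t) for n, t in candidates if t <= RUNTIME_TARGET_S]
best_n, best_t = min(feasible, key=lambda nt: nt[0] * nt[1])
print(f"pick {best_n} workers: ~{best_t:.0f}s, "
      f"~${best_n * best_t / 3600 * PRICE_PER_WORKER_HOUR:.2f}")
```

A production optimizer has to account for far more (communication patterns, memory fit, spot pricing), but the shape of the problem – build a model from logs, then search configurations against cost and runtime constraints – is the same.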

While Sync is starting with Apache Spark, the company is working to extend these capabilities to other ecosystems such as Airflow, TensorFlow, PyTorch, and Kubernetes. It aims to be completely agnostic to the cloud vendor and the data pipeline.

Sync’s optimization works – but don’t take my word for it: try it and see for yourself!

My investment thesis for Energy Transition and Digitization revolves around data infrastructure and operational performance. The next generation of enterprise-critical infrastructure will need to build on cloud, data analytics, and cybersecurity on one hand (“within-the-enterprise infrastructure”) and on AI/ML, IoT, and emerging distributed ledger technologies (DLT) on the other (“outside-of-the-enterprise infrastructure”) to bring energy-centric enterprises into the decarbonized and decentralized future.

We are excited about the new pathways Sync Computing opens for efficient, easy-to-use, and responsible infrastructure.

Raghu Madabushi is a Director at National Grid Partners, investing in early-stage companies in the broad enterprise software vertical. He has 20+ years of experience with technology, capital markets, and IP/innovation. He previously invested in deep tech and industrial infrastructure at SRI Ventures and GE Ventures; managed a large portfolio of open-source technology projects at the Linux Foundation; and headed early-stage startup investing at Intellectual Ventures’ Invention Development Fund. Raghu has also held buy-side and sell-side roles at Wall Street firms and brings extensive experience in hardware and software design (Texas Instruments, Intel, Cadence Design Systems). He received an MBA in Finance and Investments from Southern Methodist University and an MS in Computer Engineering from Iowa State University, and he is a Kauffman Fellow.