VESSL AI Secures $12M for its MLOps Platform That Aims to Cut GPU Costs by Up to 80%

As businesses increasingly integrate artificial intelligence into their workflows and products, there is a growing demand for tools and platforms that make it easier to create, test, and deploy machine learning models. This category of platforms — known as machine learning operations, or MLOps — is already crowded, with startups like InfuseAI, Comet, Arrikto, Arize, Galileo, Tecton, and Diveplane, alongside offerings from incumbents like Google Cloud, Azure, and AWS.

Now, one South Korean MLOps platform, VESSL AI, is trying to carve out a niche by optimizing GPU expenses through a hybrid infrastructure that combines on-premise and cloud environments. The startup has raised a $12 million Series A round to accelerate development of its infrastructure, targeting companies that want to build custom large language models (LLMs) and vertical AI agents.

The company has 50 enterprise customers, including Hyundai; LIG Nex1, a South Korean aerospace manufacturer; and TMAP Mobility, a mobility-as-a-service joint venture with Uber, along with tech startups Yanolja, Upstage, ScatterLab, and Wrtn.ai. In the U.S., it has strategic partnerships with Oracle and Google Cloud. The platform has more than 2,000 users, co-founder and CEO Jaeman Kuss An told a tech publication.

An founded the startup in 2020 with Jihwan Jay Chun (CTO), Intae Ryoo (CPO), and Yongseon Sean Lee (tech lead). The founders previously worked at Google, mobile game company PUBG, and various AI startups. They set out to solve a pain point An had faced while developing machine learning models at a medical tech startup: the immense amount of work involved in building with machine learning tools.

The team found they could make the process more efficient and cheaper by leveraging a hybrid infrastructure model. The company’s MLOps platform uses a multi-cloud strategy and spot instances to cut GPU expenses by up to 80%. An noted that this approach addresses GPU shortages and streamlines the training, deployment, and operation of AI models, including large-scale LLMs.

“VESSL AI’s multi-cloud strategy enables the use of GPUs from various cloud service providers like AWS and Google Cloud,” An said. “This system automatically selects the most cost-effective resources, significantly reducing customer costs.”

VESSL’s platform offers four main features: VESSL Run, which automates AI model training; VESSL Serve, which supports real-time deployment; VESSL Pipelines, which integrates model training and data preprocessing to streamline workflows; and VESSL Cluster, which optimizes GPU resource usage in a cluster environment.

Investors in the Series A round, which brings the company's total raised to $16.8 million, include A Ventures, Ubiquoss Investment, Mirae Asset Securities, Sirius Investment, SJ Investment Partners, Wooshin Venture Investment, and Shinhan Venture Investment. The startup has 35 employees across South Korea and a San Mateo office in the U.S.