Physical AI Simulation Startup Antioch Raises $8.5M Seed To Close The Sim-To-Real Gap

One of the most transformative long-term promises of physical AI is that engineers will one day be able to program and build physical robotic agents just as flexibly and easily as they build digital software tools.

That vision remains far from reality today. Robotics development is still held back by a critical shortage of high-quality data captured from real physical environments. To train their systems, companies currently have to sink resources into building full-scale mock test facilities like fake warehouses, and an entirely new industry has emerged around monitoring factory floors and gig worker movements to generate the labeled data needed to train deep learning models for robot operation.

Simulation has emerged as a promising workaround for this costly, slow process. Hyper-detailed virtual replicas of real-world spaces can provide the unlimited training data and test workspaces roboticists need to scale their work far more efficiently.

Antioch, a startup building custom simulation tools for robot developers, is focused on solving what the industry calls the sim-to-real gap — the persistent challenge of making virtual environments accurate enough that robots trained entirely in simulation can operate reliably when deployed in the physical world.

“Our core question is: how do we shrink this gap as much as possible, so that simulation feels identical to the real world from an autonomous system’s perspective?” said Antioch co-founder Harry Mellsop.

To advance that mission, the company announced to TechCrunch this week it has closed an $8.5 million seed funding round that values the startup at $60 million. The round was co-led by venture firms A* and Category Ventures, with additional participation from MaC Venture Capital, Abstract, Box Group, and Icehouse Ventures.

Mellsop launched the New York-based company alongside four co-founders in May of last year. Two of his co-founders, Alex Langshur and Michael Calvey, previously built Transpose, a security and intelligence startup, alongside Mellsop before selling the company to blockchain analytics firm Chainalysis for an undisclosed sum. The remaining two co-founders bring deep AI and robotics experience: Collin Schlager previously worked at Meta Reality Labs, while Colton Swingle came from Google DeepMind.


Connect with your next investor or high-growth portfolio startup at TechCrunch Disrupt

Land your next funding round, hire your next key team member, or unlock your game-changing opportunity. Find it all at TechCrunch Disrupt 2026, where more than 10,000 founders, investors, and tech leaders will gather for three days of over 250 hands-on tactical sessions, valuable networking introductions, and market-shaping innovation. Register today to save up to $410 on your pass.


The demand for higher-quality simulation is a core priority for nearly every major player working on autonomous systems. In the self-driving car space, for example, Waymo already leverages Google DeepMind’s world model to test and validate its own driving algorithms. In theory, this approach cuts down on the amount of on-the-ground data collection needed to deploy Waymo vehicles in new regions, addressing one of the biggest cost barriers to scaling autonomous vehicle technology.

Building simulation tools for general-purpose robots requires a very different skill set than building simulation for self-driving cars, and Antioch aims to fill that need for younger companies that don’t have the capital to build this infrastructure in-house. These smaller firms also rarely have the budget to construct custom physical test facilities or log millions of miles of on-road testing with sensor-laden vehicles.

“Right now, the vast majority of the robotics industry doesn’t use any simulation at all, and we’re just now starting to clearly see that we need to speed up development timelines dramatically,” Mellsop said.

Antioch’s leadership compares their product to Cursor, the popular AI-powered software development tool. The Antioch platform lets robot developers spin up multiple digital instances of their hardware designs, and connects these instances to simulated sensors that output the exact same data the robot’s software would collect in a real physical environment. These simulated spaces let developers test rare edge cases, run reinforcement learning training, and generate unlimited new training data for their models.

All of this only works if the simulation is sufficiently high-fidelity. The core technical challenge is ensuring simulated physics match real-world physics exactly, so no unexpected failures occur when the trained model is deployed to a physical robot. Antioch builds its tools on top of base models from leading developers including Nvidia and World Labs, then adds custom domain-specific libraries to make these tools accessible for robot developers. Company leaders note that working with a broad range of customers gives Antioch more contextual insight to refine its simulations than any single in-house team at a physical AI company could build on its own.

“The same transformation that swept software engineering and large language models is just getting started in physical AI,” said Çağla Kaymaz, a partner at Category Ventures, in an interview with TechCrunch. “We invest heavily in developer tools; it’s one of our favorite verticals. But the challenges here are completely different. In software, even if you have low-quality tools, risk is generally contained to the digital world. In the physical world, the stakes are exponentially higher.”

Antioch currently focuses primarily on sensor and perception systems, which make up the bulk of demand across autonomous cars and trucks, agricultural and construction machinery, and aerial drones. The long-term goal of physical AI, powering general-purpose robots that replicate human work, is still years away. While Antioch primarily markets its tools to early-stage startups, some of its earliest partnerships are with large multinational corporations already investing heavily in robotics.

Veteran industry executive Adrian Macneil, an angel investor in Antioch, has deep first-hand experience in this space. As a former executive at self-driving startup Cruise, he built the company’s core data infrastructure before founding Foxglove, a startup that provides pre-built data pipelines for physical AI developers, in 2021.

“Simulation is absolutely critical when you’re building a safety case for an autonomous system or working on very high-accuracy tasks,” Macneil said Wednesday at the Ride AI conference in San Francisco. “It’s simply impossible to log enough real-world miles to cover every possible scenario.”

Macneil says he wants to see the same kind of commoditized off-the-shelf tools that powered the SaaS revolution — platforms like GitHub, Stripe, and Twilio — emerge to support the physical AI ecosystem. “We need far more of the full development toolchain to be available off the shelf for developers,” he told TechCrunch.

“Our whole team genuinely believes that any developer building an autonomous system for the real world will do the vast majority of their work in software within the next two to three years,” Mellsop said. “This is the first time we’ve been able to let autonomous agents iterate on physical autonomy systems, and actually close the development feedback loop quickly.”

Early experiments already showcase this potential. David Mayo, a researcher at MIT’s Computer Science and Artificial Intelligence Laboratory, is using Antioch’s platform to evaluate LLMs for robotics. In one experiment, Mayo had AI models design custom robots from scratch, then used Antioch’s simulator to test their performance. The platform even lets users pit different AI-designed robots against each other in simulated competitions, such as matches where bots try to push rivals off a platform. Giving LLMs access to this kind of realistic sandbox could create an entirely new paradigm for benchmarking LLM performance on physical AI use cases.

Even so, there is still significant work ahead to fully close the gap between digital simulation and real-world physical performance. If that gap can be closed, developers will be able to build the self-reinforcing data flywheel that Macneil identifies as the key to success for category leaders like Waymo, where engineering teams can consistently count on each new month’s model being more capable than the last. For other companies looking to replicate that progress, the choice will be simple: build the simulation infrastructure themselves, or buy it from specialists like Antioch.
