Hello HN,
I posted this project last week and received some valuable feedback. I’ve made a lot of changes since then and wanted to share it again to see what you think.
The idea came from a personal project I was working on. When I tried to deploy my agent to the cloud, I ran into a lot of headaches — setting up VMs, writing config, handling crashes. I decided to build a solution for it and called it Agentainer.
Agentainer’s goal is to let anyone (even coding agents) deploy LLM agents into production without spending hours setting up infrastructure.
Here’s what Agentainer does:
One-click deployment: Deploy your containerized LLM agent (any language) as a Docker image
Lifecycle management: Start, stop, pause, resume, and auto-recover via UI or API
Auto-recovery: Agents restart automatically after a crash and return to their last working state
State persistence: Uses Redis for in-memory state and PostgreSQL for snapshots
Per-agent secure APIs: Each agent gets its own REST/gRPC endpoint with token-based auth and usage logging (e.g. https://agentainer.io/{agentId}/{agentEndpoint})
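To make the per-agent endpoint concrete, here is a minimal sketch of what a call might look like, based only on the URL pattern and token-based auth described above. The agent ID, endpoint name, and Bearer scheme are illustrative assumptions, not Agentainer's documented API:

```python
# Hypothetical sketch of calling a per-agent endpoint.
# The URL pattern https://agentainer.io/{agentId}/{agentEndpoint} is from the
# feature list; the IDs, endpoint name, and Bearer auth scheme are assumptions.

AGENT_ID = "my-agent"      # illustrative agent ID
ENDPOINT = "chat"          # illustrative agent endpoint
TOKEN = "example-token"    # token-based auth per the feature list

url = f"https://agentainer.io/{AGENT_ID}/{ENDPOINT}"
headers = {"Authorization": f"Bearer {TOKEN}"}

print(url)  # https://agentainer.io/my-agent/chat

# An actual request might then look like (not executed here):
# requests.post(url, headers=headers, json={"input": "..."})
```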
Most cloud platforms are designed for stateless apps or short-lived functions. They’re not ideal for long-running autonomous agents. Since a lot of dev work is now being done by coding agents themselves, Agentainer exposes all platform functions through an API. That means even non-technical founders can ship their own agents into production without needing to manage infrastructure.
If you visit the site, you’ll find a link to our GitHub repo with a working demo that includes all the features above. You can also sign up for early access to the production version, which is launching soon.
We’re applying to YC and would love to hear feedback — especially from folks running agents in production or building with them now. If you try Agentainer Lab, I’d really appreciate any thoughts or feature suggestions.
Note: Agentainer doesn’t provide any LLM models or reasoning frameworks. We’re infrastructure only — you bring the agent, and we handle deployment, state, and APIs.