NVIDIA provides free access to 135 AI models via its NIM API platform. This allows developers and investment firms to experiment with top models like DeepSeek and GLM for tasks like document analysis and due diligence without cost barriers.
For anyone building technology into their investment operations — and at this point, that should be everyone — there's a piece of infrastructure that most people have missed entirely.
NVIDIA, the company that built the hardware backbone of the AI revolution, has quietly made 135 hosted AI models available to developers via a free API tier. Not a trial. Not usage-capped with a card on file. Free. Get an API key at build.nvidia.com and you're in.
The platform is NVIDIA NIM — NVIDIA Inference Microservices. The model catalogue includes some of the most capable models currently available:
- MiniMax M2.7 — multimodal, exceptional reasoning
- GLM 5.1 — strong on structured data extraction
- Kimi K2.5 — 128k context window, long-document specialist
- DeepSeek V3.2 — arguably the most capable open-source reasoning model available today
- GPT-OSS-120B — OpenAI's flagship open-weight model
- Sarvam-M — multilingual specialist
The base URL is https://integrate.api.nvidia.com/v1. The API is OpenAI-compatible — which means it plugs directly into any OpenAI-compatible client, SDK, or development tool without modification.
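OpenAI compatibility means a request is just the standard chat-completions payload sent to the NIM base URL. A minimal sketch using only the Python standard library — the model ID and the `NVIDIA_API_KEY` environment variable name are illustrative choices, not fixed by the platform:

```python
import json
import os
import urllib.request

NIM_BASE_URL = "https://integrate.api.nvidia.com/v1"

def build_chat_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request aimed at the NIM endpoint."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{NIM_BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

if __name__ == "__main__":
    # Example model ID from the article's list; verify it in the live catalogue.
    req = build_chat_request(
        "deepseek/deepseek-v3-2",
        "Summarise the key risks in this clause: ...",
        os.environ["NVIDIA_API_KEY"],
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

Because the payload shape is identical to OpenAI's, swapping an existing OpenAI integration over to NIM is a matter of changing the base URL and the key.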
Why this matters for alternative asset managers
AI is no longer optional infrastructure for investment operations. Funds and family offices are already deploying AI for deal flow screening, document summarisation, LP reporting, market monitoring, and due diligence synthesis. The question isn't whether to build these capabilities — it's how cheaply and how well.
Free inference changes the economics of experimentation. Testing DeepSeek V3.2 for contract review, or GLM for extracting structured data from pitch decks, costs nothing. You run it, evaluate it, and decide whether it earns a place in your stack. That's how serious operators build durable AI infrastructure — not by signing enterprise contracts before they've proven the use case.
Aggregators like OpenRouter are already routing most of these models — with a margin baked in. Direct NVIDIA NIM access removes that layer entirely.
The practical setup
- Generate your API key at build.nvidia.com/models
- Set base URL to integrate.api.nvidia.com/v1
- Select a model ID — examples: minimaxai/minimax-m2.7, deepseek/deepseek-v3-2, moonshotai/kimi-k2.5
- Plug into your tool of choice — Cursor, Claude Code, any OpenAI-compatible application
The full model catalogue is at build.nvidia.com. As of now, 135 models are live and available. Most builders — and almost every asset manager — have no idea this exists.
That information asymmetry won't last long.
Alexander Knight is the founder of Whisky Cask Club and writes on alternative assets and investment technology.