ULTIMATE V1 is now available!

The biggest update yet: 1050+ native tools, complete project intelligence, and AI Unreal Insights. Covers 95% of the UE5 core. Zero subscriptions, full source code, yours forever.

Local LLM Setup


V1 Documentation - Under Heavy Rework. Some sections may be incomplete. For the latest info, check the FAB listing or Discord.

Run AI completely offline on your own machine: no internet connection required, and complete privacy. The plugin works with any OpenAI-compatible local inference server.

Supported Servers

Ollama

  1. Download and install from ollama.com
  2. Pull a model: ollama pull llama3.1
  3. Ollama starts automatically and serves on port 11434
  4. In plugin Settings > AI Models > API Key/Local LLM:
    • Provider: Custom
    • Base URL: http://localhost:11434/v1/chat/completions
    • Model Name: llama3.1 (or whatever you pulled)
    • API Key: leave empty
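With the settings above, the plugin talks to Ollama's OpenAI-compatible endpoint. As a rough sketch, this is the shape of the JSON body such a request carries (standard library only; the exact fields the plugin sends may differ):

```python
import json

def build_chat_body(model: str, user_message: str) -> str:
    """Build a minimal OpenAI-style chat completions JSON body."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return json.dumps(payload)

body = build_chat_body("llama3.1", "Hello")
# POST this body to http://localhost:11434/v1/chat/completions
```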

LM Studio

  1. Download from lmstudio.ai
  2. Load a model from the LM Studio UI
  3. Start the local server (default port 1234)
  4. In plugin Settings:
    • Provider: Custom
    • Base URL: http://localhost:1234/v1/chat/completions
    • Model Name: the model you loaded (check LM Studio’s server tab)
    • API Key: leave empty
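LM Studio's server answers in the standard OpenAI response format. A hedged sketch of how a client pulls the assistant's text out of such a response (the sample body below is illustrative, not captured from a real server):

```python
import json

# Illustrative response shape from an OpenAI-compatible server such as LM Studio.
sample_response = json.dumps({
    "choices": [{"message": {"role": "assistant", "content": "Hello from the local model"}}]
})

def extract_reply(response_body: str) -> str:
    """Pull the assistant text out of an OpenAI-style chat completion response."""
    data = json.loads(response_body)
    return data["choices"][0]["message"]["content"]

print(extract_reply(sample_response))  # prints the assistant content
```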

Lemonade (AMD)

  1. Download from lemonade-server.ai
  2. Load a model (optimized for AMD GPUs/NPUs)
  3. Server runs on port 13305
  4. In plugin Settings:
    • Provider: Custom
    • Base URL: http://localhost:13305/api/v1/chat/completions
    • Model Name: your loaded model name (e.g., qwen3.5-9b-FLM)
    • API Key: leave empty

Any OpenAI-Compatible Server

The plugin works with any server that implements the OpenAI /v1/chat/completions endpoint:

  • vLLM
  • text-generation-webui (with OpenAI extension)
  • LocalAI
  • Jan
  • Koboldcpp (with OpenAI API mode)
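Because all of these servers expose the same endpoint shape, the request is identical regardless of backend. A minimal standard-library sketch of such a request (built but not sent here; the plugin's internal implementation may differ):

```python
import json
import urllib.request

def make_chat_request(base_url: str, model: str, api_key: str = "") -> urllib.request.Request:
    """Build a POST request for an OpenAI-compatible chat completions endpoint."""
    headers = {"Content-Type": "application/json"}
    if api_key:  # local servers usually need no key
        headers["Authorization"] = f"Bearer {api_key}"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": "Hello"}],
    }).encode("utf-8")
    return urllib.request.Request(base_url, data=body, headers=headers, method="POST")

req = make_chat_request("http://localhost:1234/v1/chat/completions", "my-local-model")
# urllib.request.urlopen(req) would send it once the server is running
```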

Configuration Tips

Base URL Format

The URL must point to the chat completions endpoint. Common patterns:

  • http://localhost:PORT/v1/chat/completions
  • http://localhost:PORT/api/v1/chat/completions
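A hypothetical helper for catching the most common misconfiguration, a base URL without the endpoint path (the plugin itself expects the full URL; this normalization is only an illustration):

```python
def normalize_base_url(url: str) -> str:
    """Append the standard chat completions path if the URL lacks it."""
    url = url.rstrip("/")
    if not url.endswith("/chat/completions"):
        url += "/v1/chat/completions"
    return url

print(normalize_base_url("http://localhost:11434"))
# → http://localhost:11434/v1/chat/completions
```

Note that servers with a non-standard prefix (e.g., Lemonade's /api/v1/...) already include the full path, so they pass through unchanged.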

API Key

Leave empty for local servers that don’t require authentication. The plugin skips the Authorization header when no key is set.
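The header-skipping behavior described above can be sketched as a simple conditional (an illustration of the idea, not the plugin's actual code):

```python
def build_headers(api_key: str = "") -> dict:
    """Build request headers, omitting Authorization when no key is set."""
    headers = {"Content-Type": "application/json"}
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"
    return headers
```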

Custom Request Params

Use the “Custom Request Params (JSON)” field to pass extra parameters:

{"temperature": 0.7, "max_tokens": 16384}
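Conceptually, these extra parameters are merged into the request body on top of the defaults. A sketch of that merge (assuming custom keys override defaults; the plugin's precedence rules may differ):

```python
import json

def apply_custom_params(payload: dict, custom_json: str) -> dict:
    """Merge user-supplied JSON params into a chat completions payload."""
    merged = dict(payload)
    merged.update(json.loads(custom_json))
    return merged

base = {"model": "llama3.1", "messages": []}
merged = apply_custom_params(base, '{"temperature": 0.7, "max_tokens": 16384}')
```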

Model Selection

Enter the exact model name as your server reports it. For Ollama, this is the model tag (e.g., llama3.1, codellama:13b). For LM Studio, check the server tab for the loaded model identifier.

Limitations

  • Context window: Local models typically have 4K-32K token contexts. The plugin’s system prompt uses several thousand tokens, which may not leave much room for conversation on small models. Use models with 32K+ context for best results.
  • Tool calling: The AI needs a capable model to properly use tools. Small models (7B and under) may struggle with the structured JSON tool format. 13B+ recommended for tool-heavy workflows.
  • Speed: Generation speed depends entirely on your hardware. GPU acceleration is recommended.
  • Just Chat mode: If tool execution fails with your local model, try “Just Chat” mode which sends a minimal payload without tool instructions.

Privacy

When using a local LLM, zero data leaves your machine. No API calls are made to any external server. Your prompts, project context, and generated content stay entirely local.
