Multi-model visual understanding MCP server supporting GLM-4.6V, DeepSeek-OCR (free), and Qwen3-VL-Flash. It provides visual processing capabilities for AI coding models that do not support image understanding.
Run the server directly with:

```shell
npx -y luma-mcp
```

Add this configuration to your `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "jochenyang-luma-mcp-github": {
      "command": "npx",
      "args": [
        "-y",
        "luma-mcp"
      ]
    }
  }
}
```

Restart Claude Desktop, then ask:
"What tools do you have available from luma mcp?"
API Key Required
This server requires an API key from luma mcp. Add it to your environment or config.
| Variable | Required | Description |
|---|---|---|
| LUMA_MCP_API_KEY | Yes | Your luma mcp API key |
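One way to pass the key to the server is through an `env` block in the Claude Desktop config entry. This is a sketch assuming the standard `claude_desktop_config.json` format; the placeholder value is hypothetical and should be replaced with your actual key:

```json
{
  "mcpServers": {
    "jochenyang-luma-mcp-github": {
      "command": "npx",
      "args": ["-y", "luma-mcp"],
      "env": {
        "LUMA_MCP_API_KEY": "your-api-key-here"
      }
    }
  }
}
```

Alternatively, the variable can be exported in the shell environment that launches Claude Desktop.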
"What resources are available in luma mcp?"
Claude will query available resources and return a list of what you can access.
"Show me details about [specific item] in luma mcp"
Claude will fetch and display detailed information about the requested item.
"Create a new [item] in luma mcp with [details]"
Claude will use the appropriate tool to create the resource and confirm success.
We build custom MCP integrations for B2B companies, from simple connections to complex multi-tool setups.