Run Claude Code with local MLX-powered models. Operations teams benefit from on-premises AI coding assistance. The proxy connects Claude Code to local MLX models through a Python server, replacing calls to Anthropic's cloud API.
Clone the repository, then follow the setup instructions in its README:

```shell
git clone https://github.com/chand1012/claude-code-mlx-proxy.git
```
1. Launch Claude Code, Cursor, or your preferred AI coding agent.
2. Use the prompt template or examples below to test the skill.
3. Adapt the skill to your specific use case and workflow.
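A typical way to point Claude Code at a local proxy is via the `ANTHROPIC_BASE_URL` environment variable. The host and port below are assumptions; check the repository README for the address the proxy actually listens on.

```shell
# Point Claude Code at the local proxy instead of the Anthropic API.
# http://localhost:8080 is an assumed address -- consult the proxy's
# README for the real host and port.
export ANTHROPIC_BASE_URL=http://localhost:8080
# Then launch Claude Code as usual:
# claude
```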
I'm using Claude Code MLX Proxy to assist with [PROGRAMMING_TASK] in [PROGRAMMING_LANGUAGE]. I have [DESCRIBE_LOCAL_MLX_SETUP] available. Please provide code suggestions or explanations while considering my local MLX models.
# Code Review and Optimization for Local MLX Model
## Current Implementation
```python
# The following code processes image data using MLX
import mlx.core as mx
import mlx.nn as nn

class ImageProcessor(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, stride=1, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, stride=1, padding=1)
        self.pool = nn.AvgPool2d(kernel_size=2, stride=2)

    def __call__(self, x):
        # MLX convolutions expect NHWC input: (batch, height, width, channels)
        x = nn.relu(self.conv1(x))
        x = self.pool(x)
        x = nn.relu(self.conv2(x))
        return x
```
## Recommendations
1. **Batch Processing**: Consider implementing batch processing to improve efficiency:
```python
def __call__(self, x):
    # Add batch dimension if not present
    if x.ndim == 3:
        x = x[None, ...]
    ...
```
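Because MLX arrays follow NumPy indexing semantics, the batch-dimension idiom can be sanity-checked with NumPy alone. `ensure_batched` is a hypothetical helper name used only for this sketch:

```python
import numpy as np

def ensure_batched(x):
    # Promote a single image (H, W, C) to a batch of one (1, H, W, C);
    # leave already-batched input untouched.
    if x.ndim == 3:
        x = x[None, ...]
    return x

single = np.zeros((32, 32, 3))
print(ensure_batched(single).shape)  # (1, 32, 32, 3)
```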
2. **Model Architecture**: For your specific use case, consider adding a third convolutional layer:
```python
self.conv3 = nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1)
```
3. **Training Setup**: Use MLX's built-in optimizers rather than a hand-rolled update loop:
```python
import mlx.optimizers as optim
optimizer = optim.Adam(learning_rate=0.001)
```
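For intuition about what `optim.Adam` is doing, the textbook Adam update rule can be sketched in plain NumPy. The function name and hyperparameter defaults here follow common conventions and are not MLX internals:

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    # Exponential moving averages of the gradient and its square.
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    # Bias-corrected estimates compensate for zero initialization.
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    # Parameter step scaled by the running curvature estimate.
    return param - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Minimize f(p) = p^2 for a few steps; the loss should shrink toward zero.
p, m, v = np.array(5.0), 0.0, 0.0
for t in range(1, 201):
    p, m, v = adam_step(p, 2.0 * p, m, v, t, lr=0.1)
print(float(p * p))
```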
## Implementation Notes
- The current implementation lacks proper error handling for edge cases
- Consider adding input validation for the image dimensions
- The model architecture may benefit from skip connections for better gradient flow