Mirror of https://github.com/samanhappy/mcphub.git
Synced 2025-12-24 18:59:30 -05:00

Compare commits: 5 commits (copilot/ad...copilot/fi)

- 3a9ea9bc4b
- 3acdd99664
- 4ac875860c
- 7b9e9da7bc
- cd7e2a23a3

README.md (29 / 29)

@@ -21,7 +21,6 @@ MCPHub makes it easy to manage and scale multiple MCP (Model Context Protocol) s

- **Secure Authentication**: Built-in user management with role-based access powered by JWT and bcrypt.
- **OAuth 2.0 Support**: Full OAuth support for upstream MCP servers with proxy authorization capabilities.
- **Environment Variable Expansion**: Use environment variables anywhere in your configuration for secure credential management. See [Environment Variables Guide](docs/environment-variables.md).
- **Cluster Deployment**: Deploy multiple nodes for high availability and load distribution with sticky session support. See [Cluster Deployment Guide](docs/cluster-deployment.md).
- **Docker-Ready**: Deploy instantly with our containerized setup.

## 🔧 Quick Start

@@ -99,6 +98,34 @@ Manual registration example:

For manual providers, create the OAuth App in the upstream console, set the redirect URI to `http://localhost:3000/oauth/callback` (or your deployed domain), and then plug the credentials into the dashboard or config file.

#### Connection Modes (Optional)

MCPHub supports two connection strategies:

- **`persistent` (default)**: Maintains long-running connections for stateful servers
- **`on-demand`**: Connects only when needed, ideal for ephemeral servers that exit after operations

Example for one-time use servers:

```json
{
  "mcpServers": {
    "pdf-reader": {
      "command": "npx",
      "args": ["-y", "pdf-mcp-server"],
      "connectionMode": "on-demand"
    }
  }
}
```

Use `on-demand` mode for servers that:

- Don't support long-running connections
- Exit automatically after handling requests
- Experience "Connection closed" errors

See the [Configuration Guide](docs/configuration/mcp-settings.mdx) for more details.

### Docker Deployment

**Recommended**: Mount your custom config:

@@ -19,9 +19,6 @@ MCPHub organizes multiple MCP (Model Context Protocol) servers into flexible

- **Hot-Swappable Configuration**: Add, remove, or update server configurations dynamically at runtime, with no downtime.
- **Group-Based Access Control**: Define custom groups and manage server access permissions.
- **Secure Authentication**: Built-in user management with role-based access control powered by JWT and bcrypt.
- **OAuth 2.0 Support**: Full OAuth support for proxy authorization of upstream MCP servers.
- **Environment Variable Expansion**: Use environment variables anywhere in your configuration for secure credential management. See the [Environment Variables Guide](docs/environment-variables.md).
- **Cluster Deployment**: Deploy multiple nodes for high availability and load distribution, with sticky session support. See the [Cluster Deployment Guide](docs/cluster-deployment.zh.md).
- **Docker-Ready**: Ships as a container image for instant deployment.

## 🔧 Quick Start
@@ -1,516 +0,0 @@
|
||||
# Cluster Deployment Guide
|
||||
|
||||
MCPHub supports cluster deployment, allowing you to run multiple nodes that work together as a unified system. This enables:
|
||||
|
||||
- **High Availability**: Distribute MCP servers across multiple nodes for redundancy
|
||||
- **Load Distribution**: Balance requests across multiple replicas of the same MCP server
|
||||
- **Sticky Sessions**: Ensure client sessions are routed to the same node consistently
|
||||
- **Centralized Management**: One coordinator manages the entire cluster
|
||||
|
||||
## Architecture
|
||||
|
||||
MCPHub cluster has three operating modes:
|
||||
|
||||
1. **Standalone Mode** (Default): Single node operation, no cluster features
|
||||
2. **Coordinator Mode**: Central node that manages the cluster, routes requests, and maintains session affinity
|
||||
3. **Node Mode**: Worker nodes that register with the coordinator and run MCP servers
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────┐
|
||||
│ Coordinator Node │
|
||||
│ - Manages cluster state │
|
||||
│ - Routes client requests │
|
||||
│ - Maintains session affinity │
|
||||
│ - Health monitoring │
|
||||
└───────────┬─────────────────────────────┘
|
||||
│
|
||||
┌───────┴───────────────────┐
|
||||
│ │
|
||||
┌───▼────────┐ ┌────────▼────┐
|
||||
│ Node 1 │ │ Node 2 │
|
||||
│ - MCP A │ │ - MCP A │
|
||||
│ - MCP B │ │ - MCP C │
|
||||
└────────────┘ └─────────────┘
|
||||
```
|
||||
|
||||
## Configuration
|
||||
|
||||
### Coordinator Configuration
|
||||
|
||||
Create or update `mcp_settings.json` on the coordinator node:
|
||||
|
||||
```json
|
||||
{
|
||||
"mcpServers": {
|
||||
// Optional: coordinator can also run MCP servers
|
||||
"example": {
|
||||
"command": "npx",
|
||||
"args": ["-y", "example-mcp-server"]
|
||||
}
|
||||
},
|
||||
"systemConfig": {
|
||||
"cluster": {
|
||||
"enabled": true,
|
||||
"mode": "coordinator",
|
||||
"coordinator": {
|
||||
"nodeTimeout": 15000,
|
||||
"cleanupInterval": 30000,
|
||||
"stickySessionTimeout": 3600000
|
||||
},
|
||||
"stickySession": {
|
||||
"enabled": true,
|
||||
"strategy": "consistent-hash",
|
||||
"cookieName": "MCPHUB_NODE",
|
||||
"headerName": "X-MCPHub-Node"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Configuration Options:**
|
||||
|
||||
- `nodeTimeout`: Time (ms) before marking a node as unhealthy (default: 15000)
|
||||
- `cleanupInterval`: Interval (ms) for cleaning up inactive nodes (default: 30000)
|
||||
- `stickySessionTimeout`: Session affinity timeout (ms) (default: 3600000 - 1 hour)
|
||||
- `stickySession.enabled`: Enable sticky session routing (default: true)
|
||||
- `stickySession.strategy`: Session affinity strategy:
|
||||
- `consistent-hash`: Hash-based routing (default)
|
||||
- `cookie`: Cookie-based routing
|
||||
- `header`: Header-based routing
|
||||

### Node Configuration

Create or update `mcp_settings.json` on each worker node:

```json
{
  "mcpServers": {
    "amap": {
      "command": "npx",
      "args": ["-y", "@amap/amap-maps-mcp-server"]
    },
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest", "--headless"]
    }
  },
  "systemConfig": {
    "cluster": {
      "enabled": true,
      "mode": "node",
      "node": {
        "id": "node-1",
        "name": "Worker Node 1",
        "coordinatorUrl": "http://coordinator:3000",
        "heartbeatInterval": 5000,
        "registerOnStartup": true
      }
    }
  }
}
```

**Configuration Options:**

- `node.id`: Unique node identifier (auto-generated if not provided)
- `node.name`: Human-readable node name (defaults to hostname)
- `node.coordinatorUrl`: URL of the coordinator node (required)
- `node.heartbeatInterval`: Heartbeat interval (ms) (default: 5000)
- `node.registerOnStartup`: Auto-register on startup (default: true)
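
The interplay between `heartbeatInterval` and `nodeTimeout` can be sketched as a small health check. This is an illustration of the timing semantics described above, not MCPHub's actual internals; the type and function names are hypothetical.

```typescript
// Sketch: how a coordinator could decide node health from heartbeats.
interface NodeState {
  id: string;
  lastHeartbeat: number; // epoch ms of the most recent heartbeat
}

// A node counts as healthy while its last heartbeat is within nodeTimeout.
function isHealthy(node: NodeState, now: number, nodeTimeout = 15000): boolean {
  return now - node.lastHeartbeat <= nodeTimeout;
}

// With heartbeatInterval = 5000 and nodeTimeout = 15000, a node can miss
// roughly two heartbeats before being marked unhealthy.
const now = Date.now();
console.log(isHealthy({ id: "node-1", lastHeartbeat: now - 5000 }, now));  // true
console.log(isHealthy({ id: "node-1", lastHeartbeat: now - 20000 }, now)); // false
```

This is also why the defaults are sized the way they are: the timeout is three heartbeat intervals, so a single dropped heartbeat does not flap the node's status.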

## Deployment Scenarios

### Scenario 1: Docker Compose

Create a `docker-compose.yml`:

```yaml
version: '3.8'

services:
  coordinator:
    image: samanhappy/mcphub:latest
    ports:
      - "3000:3000"
    volumes:
      - ./coordinator-config.json:/app/mcp_settings.json
      - coordinator-data:/app/data
    environment:
      - NODE_ENV=production

  node1:
    image: samanhappy/mcphub:latest
    volumes:
      - ./node1-config.json:/app/mcp_settings.json
      - node1-data:/app/data
    environment:
      - NODE_ENV=production
    depends_on:
      - coordinator

  node2:
    image: samanhappy/mcphub:latest
    volumes:
      - ./node2-config.json:/app/mcp_settings.json
      - node2-data:/app/data
    environment:
      - NODE_ENV=production
    depends_on:
      - coordinator

volumes:
  coordinator-data:
  node1-data:
  node2-data:
```

Start the cluster:

```bash
docker-compose up -d
```

### Scenario 2: Kubernetes

Create Kubernetes manifests:

**Coordinator Deployment:**

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mcphub-coordinator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mcphub-coordinator
  template:
    metadata:
      labels:
        app: mcphub-coordinator
    spec:
      containers:
        - name: mcphub
          image: samanhappy/mcphub:latest
          ports:
            - containerPort: 3000
          volumeMounts:
            - name: config
              mountPath: /app/mcp_settings.json
              subPath: mcp_settings.json
      volumes:
        - name: config
          configMap:
            name: mcphub-coordinator-config
---
apiVersion: v1
kind: Service
metadata:
  name: mcphub-coordinator
spec:
  selector:
    app: mcphub-coordinator
  ports:
    - port: 3000
      targetPort: 3000
  type: LoadBalancer
```

**Worker Node Deployment:**

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mcphub-node
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mcphub-node
  template:
    metadata:
      labels:
        app: mcphub-node
    spec:
      containers:
        - name: mcphub
          image: samanhappy/mcphub:latest
          volumeMounts:
            - name: config
              mountPath: /app/mcp_settings.json
              subPath: mcp_settings.json
      volumes:
        - name: config
          configMap:
            name: mcphub-node-config
```

Apply the manifests:

```bash
kubectl apply -f coordinator.yaml
kubectl apply -f nodes.yaml
```
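
The Deployments above mount ConfigMaps named `mcphub-coordinator-config` and `mcphub-node-config`, which must exist before the pods start. A minimal sketch of the node ConfigMap, assuming the settings from the Node Configuration section; the coordinator URL here uses the `mcphub-coordinator` Service name and is an assumption you should adapt to your namespace:

```yaml
# Hypothetical ConfigMap embedding mcp_settings.json for worker pods.
apiVersion: v1
kind: ConfigMap
metadata:
  name: mcphub-node-config
data:
  mcp_settings.json: |
    {
      "systemConfig": {
        "cluster": {
          "enabled": true,
          "mode": "node",
          "node": {
            "coordinatorUrl": "http://mcphub-coordinator:3000"
          }
        }
      }
    }
```

Since all three replicas share this one file, `node.id` is deliberately omitted so that each pod auto-generates a unique identifier, as described in the configuration options.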

### Scenario 3: Manual Deployment

**On Coordinator (192.168.1.100):**

```bash
# Install MCPHub
npm install -g @samanhappy/mcphub

# Configure as coordinator
cat > mcp_settings.json <<EOF
{
  "systemConfig": {
    "cluster": {
      "enabled": true,
      "mode": "coordinator"
    }
  }
}
EOF

# Start coordinator
PORT=3000 mcphub
```

**On Node 1 (192.168.1.101):**

```bash
# Install MCPHub
npm install -g @samanhappy/mcphub

# Configure as node
cat > mcp_settings.json <<EOF
{
  "mcpServers": {
    "server1": { "command": "..." }
  },
  "systemConfig": {
    "cluster": {
      "enabled": true,
      "mode": "node",
      "node": {
        "coordinatorUrl": "http://192.168.1.100:3000"
      }
    }
  }
}
EOF

# Start node
PORT=3001 mcphub
```

**On Node 2 (192.168.1.102):**

```bash
# Similar to Node 1, but with PORT=3002
```

## Usage

### Accessing the Cluster

Once the cluster is running, connect AI clients to the coordinator's endpoints:

```
http://coordinator:3000/mcp
http://coordinator:3000/sse
```

The coordinator will:

1. Route requests to appropriate nodes based on session affinity
2. Load balance across multiple replicas of the same server
3. Automatically fail over to healthy nodes

### Sticky Sessions

Sticky sessions ensure that a client's requests are routed to the same node throughout the session. This is important for:

- Maintaining conversation context
- Preserving temporary state
- Consistent tool execution

The default strategy is **consistent-hash**, which uses the session ID to determine the target node. Alternative strategies:

- **Cookie-based**: Uses the `MCPHUB_NODE` cookie
- **Header-based**: Uses the `X-MCPHub-Node` header

### Multiple Replicas

You can deploy the same MCP server on multiple nodes for:

- **Load balancing**: Distribute requests across replicas
- **High availability**: Fail over if one node goes down

Example configuration:

**Node 1:**

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```

**Node 2:**

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```

The coordinator will automatically load balance requests to `playwright` across both nodes.

## Management API

The coordinator exposes cluster management endpoints:

### Get Cluster Status

```bash
curl http://coordinator:3000/api/cluster/status
```

Response:

```json
{
  "success": true,
  "data": {
    "enabled": true,
    "mode": "coordinator",
    "nodeId": "coordinator",
    "stats": {
      "nodes": 3,
      "activeNodes": 3,
      "servers": 5,
      "sessions": 10
    }
  }
}
```

### Get All Nodes

```bash
curl http://coordinator:3000/api/cluster/nodes
```

### Get Server Replicas

```bash
curl http://coordinator:3000/api/cluster/servers/playwright/replicas
```

### Get Session Affinity

```bash
curl http://coordinator:3000/api/cluster/sessions/{sessionId}
```
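
A monitoring script can compare `nodes` and `activeNodes` from the status payload to flag a degraded cluster. The field names follow the sample response above; treat this as illustrative client code, not an official SDK.

```typescript
// Sketch: interpreting /api/cluster/status stats.
interface ClusterStats {
  nodes: number;       // nodes ever registered and not yet cleaned up
  activeNodes: number; // nodes with a recent heartbeat
  servers: number;
  sessions: number;
}

// Degraded when any registered node is not currently active.
function isDegraded(stats: ClusterStats): boolean {
  return stats.activeNodes < stats.nodes;
}

const sample: ClusterStats = { nodes: 3, activeNodes: 3, servers: 5, sessions: 10 };
console.log(isDegraded(sample)); // false
```

In practice you would fetch the endpoint, unwrap `data.stats`, and alert when `isDegraded` returns true.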

## Monitoring and Troubleshooting

### Check Node Health

Monitor coordinator logs for heartbeat messages:

```
Node registered: Worker Node 1 (node-1) with 2 servers
```

If a node becomes unhealthy:

```
Marking node node-1 as unhealthy (last heartbeat: 2024-01-01T10:00:00.000Z)
```

### Verify Registration

Check whether nodes are registered:

```bash
curl http://coordinator:3000/api/cluster/nodes?active=true
```

### Session Affinity Issues

If sessions aren't sticking to the same node:

1. Verify that sticky sessions are enabled in the coordinator config
2. Check that session IDs are being passed correctly
3. Review coordinator logs for session affinity errors

### Network Connectivity

Ensure worker nodes can reach the coordinator:

```bash
# From a worker node
curl http://coordinator:3000/health
```

## Performance Considerations

### Coordinator Load

The coordinator handles:

- Request routing
- Node heartbeats
- Session tracking

For very large clusters (>50 nodes), consider:

- Increasing coordinator resources
- Tuning heartbeat intervals
- Using header-based sticky sessions (lower overhead)

### Network Latency

Minimize latency between the coordinator and nodes:

- Deploy in the same datacenter/region
- Use low-latency networking
- Consider placing the coordinator near clients

### Session Timeout

Balance session timeout against resource usage:

- Shorter timeout: less memory, more re-routing
- Longer timeout: better stickiness, more memory

The default is 1 hour; adjust it based on your use case.

## Limitations

1. **Stateful Sessions**: Node-local state is lost if a node fails. Use external storage for persistent state.
2. **Single Coordinator**: Currently supports one coordinator. Consider load balancing at the infrastructure level.
3. **Network Partitions**: Nodes that lose their connection to the coordinator will be marked unhealthy.

## Best Practices

1. **Use Groups**: Organize MCP servers into groups for easier management
2. **Monitor Health**: Set up alerts for unhealthy nodes
3. **Version Consistency**: Run the same MCPHub version across all nodes
4. **Resource Planning**: Allocate appropriate resources based on MCP server requirements
5. **Backup Configuration**: Keep the coordinator config backed up
6. **Gradual Rollout**: Test the cluster configuration with a small number of nodes first

## See Also

- [Docker Deployment](../deployment/docker.md)
- [Kubernetes Deployment](../deployment/kubernetes.md)
- [High Availability Setup](../deployment/high-availability.md)

@@ -72,9 +72,13 @@ MCPHub uses several configuration files:

### Optional Fields

| Field               | Type    | Default        | Description                                                            |
| ------------------- | ------- | -------------- | ---------------------------------------------------------------------- |
| `env`               | object  | `{}`           | Environment variables                                                  |
| `connectionMode`    | string  | `"persistent"` | Connection strategy: `"persistent"` or `"on-demand"`                   |
| `enabled`           | boolean | `true`         | Enable/disable the server                                              |
| `keepAliveInterval` | number  | `60000`        | Keep-alive ping interval for SSE connections (milliseconds)            |
| `options`           | object  | `{}`           | MCP request options (timeout, resetTimeoutOnProgress, maxTotalTimeout) |

## Common MCP Server Examples

@@ -238,6 +242,68 @@ MCPHub uses several configuration files:

}
```

## Connection Modes

MCPHub supports two connection strategies for MCP servers:

### Persistent Connection (Default)

Persistent mode maintains a long-running connection to the MCP server. This is the default and recommended mode for most servers.

**Use cases:**

- Servers that maintain state between requests
- Servers with slow startup times
- Servers designed for long-running connections

**Example:**

```json
{
  "github": {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-github"],
    "connectionMode": "persistent",
    "env": {
      "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}"
    }
  }
}
```

### On-Demand Connection

On-demand mode connects only when a tool is invoked, then disconnects immediately afterward. This is ideal for servers that:

- Don't support long-running connections
- Are designed for one-time use
- Exit automatically after handling requests

**Use cases:**

- PDF processing tools that exit after each operation
- One-time command-line utilities
- Servers with connection stability issues
- Resource-intensive servers that shouldn't run continuously

**Example:**

```json
{
  "pdf-reader": {
    "command": "npx",
    "args": ["-y", "pdf-mcp-server"],
    "connectionMode": "on-demand",
    "env": {
      "PDF_CACHE_DIR": "/tmp/pdf-cache"
    }
  }
}
```

**Benefits of on-demand mode:**

- Avoids "Connection closed" errors for ephemeral services
- Reduces resource usage for infrequently used tools
- Better suited for stateless operations
- Handles servers that automatically exit after operations

**Note:** On-demand servers briefly connect during initialization to discover available tools, then disconnect. The connection is re-established only when a tool from that server is actually invoked.
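
The connect-call-disconnect lifecycle described above can be sketched as follows. The `Connection` interface and function names here are hypothetical illustrations of the pattern, not MCPHub's actual transport API.

```typescript
// Sketch: on-demand tool invocation wraps every call in its own connection.
interface Connection {
  callTool(name: string, args: unknown): Promise<unknown>;
  close(): Promise<void>;
}

async function callOnDemand(
  connect: () => Promise<Connection>,
  tool: string,
  args: unknown,
): Promise<unknown> {
  const conn = await connect(); // spawn/attach only when a tool is invoked
  try {
    return await conn.callTool(tool, args);
  } finally {
    await conn.close(); // always tear down, even if the tool call throws
  }
}
```

The `finally` block is the important part: because the server is expected to exit after each operation, the hub must treat disconnection as normal rather than as a "Connection closed" error.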

## Advanced Configuration

### Environment Variable Substitution
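
The body of this section is cut off in the diff, but the `${VAR}` placeholders used throughout these configs (for example `"${GITHUB_TOKEN}"`) follow a familiar substitution pattern. A minimal sketch of the idea; MCPHub's real expansion rules (defaults, escaping, nesting) may differ:

```typescript
// Sketch: replace ${NAME} placeholders with values from an env map,
// leaving unknown placeholders untouched.
function expandEnv(value: string, env: Record<string, string | undefined>): string {
  return value.replace(/\$\{(\w+)\}/g, (match, name) => env[name] ?? match);
}

console.log(expandEnv("${GITHUB_TOKEN}", { GITHUB_TOKEN: "ghp_example" })); // "ghp_example"
console.log(expandEnv("${MISSING}", {})); // left as-is: "${MISSING}"
```

Leaving unknown placeholders intact (rather than substituting an empty string) makes missing credentials visible in logs instead of silently producing blank values.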
@@ -1,444 +0,0 @@
|
||||
# Cluster Configuration Examples
|
||||
|
||||
## Coordinator Node Configuration
|
||||
|
||||
```json
|
||||
{
|
||||
"mcpServers": {
|
||||
"fetch": {
|
||||
"command": "uvx",
|
||||
"args": ["mcp-server-fetch"],
|
||||
"enabled": true
|
||||
}
|
||||
},
|
||||
"users": [
|
||||
{
|
||||
"username": "admin",
|
||||
"password": "$2b$10$Vt7krIvjNgyN67LXqly0uOcTpN0LI55cYRbcKC71pUDAP0nJ7RPa.",
|
||||
"isAdmin": true
|
||||
}
|
||||
],
|
||||
"systemConfig": {
|
||||
"cluster": {
|
||||
"enabled": true,
|
||||
"mode": "coordinator",
|
||||
"coordinator": {
|
||||
"nodeTimeout": 15000,
|
||||
"cleanupInterval": 30000,
|
||||
"stickySessionTimeout": 3600000
|
||||
},
|
||||
"stickySession": {
|
||||
"enabled": true,
|
||||
"strategy": "consistent-hash",
|
||||
"cookieName": "MCPHUB_NODE",
|
||||
"headerName": "X-MCPHub-Node"
|
||||
}
|
||||
},
|
||||
"routing": {
|
||||
"enableGlobalRoute": true,
|
||||
"enableGroupNameRoute": true,
|
||||
"enableBearerAuth": false
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Worker Node 1 Configuration

```json
{
  "mcpServers": {
    "amap": {
      "command": "npx",
      "args": ["-y", "@amap/amap-maps-mcp-server"],
      "env": {
        "AMAP_MAPS_API_KEY": "${AMAP_MAPS_API_KEY}"
      },
      "enabled": true
    },
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest", "--headless"],
      "enabled": true
    }
  },
  "systemConfig": {
    "cluster": {
      "enabled": true,
      "mode": "node",
      "node": {
        "id": "node-1",
        "name": "Worker Node 1",
        "coordinatorUrl": "http://coordinator:3000",
        "heartbeatInterval": 5000,
        "registerOnStartup": true
      }
    }
  }
}
```
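The `registerOnStartup`/`heartbeatInterval` pair drives a register-then-heartbeat loop against `coordinatorUrl`. Roughly sketched below; the endpoint paths and payload shape are assumptions for illustration, not MCPHub's documented API:

```typescript
// Illustrative node registration + heartbeat loop. The /register and
// /heartbeat paths and the payload are hypothetical.
async function startHeartbeat(
  coordinatorUrl: string,
  nodeId: string,
  intervalMs: number,
  post: (url: string, body: unknown) => Promise<void>,
): Promise<() => void> {
  // Register once on startup ("registerOnStartup": true) ...
  await post(`${coordinatorUrl}/api/cluster/register`, { id: nodeId });
  // ... then heartbeat every `heartbeatInterval` ms so the coordinator
  // does not expire the node after `nodeTimeout`.
  const timer = setInterval(() => {
    void post(`${coordinatorUrl}/api/cluster/heartbeat`, { id: nodeId });
  }, intervalMs);
  return () => clearInterval(timer); // caller stops the loop on shutdown
}
```

Note how the coordinator's `nodeTimeout` (15000 ms above) is deliberately a few multiples of the node's `heartbeatInterval` (5000 ms), so a single dropped heartbeat does not evict the node.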
## Worker Node 2 Configuration

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest", "--headless"],
      "enabled": true
    },
    "slack": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-slack"],
      "env": {
        "SLACK_BOT_TOKEN": "${SLACK_BOT_TOKEN}",
        "SLACK_TEAM_ID": "${SLACK_TEAM_ID}"
      },
      "enabled": true
    }
  },
  "systemConfig": {
    "cluster": {
      "enabled": true,
      "mode": "node",
      "node": {
        "id": "node-2",
        "name": "Worker Node 2",
        "coordinatorUrl": "http://coordinator:3000",
        "heartbeatInterval": 5000,
        "registerOnStartup": true
      }
    }
  }
}
```
## Docker Compose Example

```yaml
version: '3.8'

services:
  coordinator:
    image: samanhappy/mcphub:latest
    container_name: mcphub-coordinator
    hostname: coordinator
    ports:
      - "3000:3000"
    volumes:
      - ./examples/coordinator-config.json:/app/mcp_settings.json
      - coordinator-data:/app/data
    environment:
      - NODE_ENV=production
      - PORT=3000
    networks:
      - mcphub-cluster
    restart: unless-stopped

  node1:
    image: samanhappy/mcphub:latest
    container_name: mcphub-node1
    hostname: node1
    volumes:
      - ./examples/node1-config.json:/app/mcp_settings.json
      - node1-data:/app/data
    environment:
      - NODE_ENV=production
      - PORT=3001
      - AMAP_MAPS_API_KEY=${AMAP_MAPS_API_KEY}
    networks:
      - mcphub-cluster
    depends_on:
      - coordinator
    restart: unless-stopped

  node2:
    image: samanhappy/mcphub:latest
    container_name: mcphub-node2
    hostname: node2
    volumes:
      - ./examples/node2-config.json:/app/mcp_settings.json
      - node2-data:/app/data
    environment:
      - NODE_ENV=production
      - PORT=3002
      - SLACK_BOT_TOKEN=${SLACK_BOT_TOKEN}
      - SLACK_TEAM_ID=${SLACK_TEAM_ID}
    networks:
      - mcphub-cluster
    depends_on:
      - coordinator
    restart: unless-stopped

networks:
  mcphub-cluster:
    driver: bridge

volumes:
  coordinator-data:
  node1-data:
  node2-data:
```
## Kubernetes Example

### ConfigMaps

**coordinator-config.yaml:**
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mcphub-coordinator-config
  namespace: mcphub
data:
  mcp_settings.json: |
    {
      "mcpServers": {
        "fetch": {
          "command": "uvx",
          "args": ["mcp-server-fetch"],
          "enabled": true
        }
      },
      "users": [
        {
          "username": "admin",
          "password": "$2b$10$Vt7krIvjNgyN67LXqly0uOcTpN0LI55cYRbcKC71pUDAP0nJ7RPa.",
          "isAdmin": true
        }
      ],
      "systemConfig": {
        "cluster": {
          "enabled": true,
          "mode": "coordinator",
          "coordinator": {
            "nodeTimeout": 15000,
            "cleanupInterval": 30000,
            "stickySessionTimeout": 3600000
          },
          "stickySession": {
            "enabled": true,
            "strategy": "consistent-hash"
          }
        }
      }
    }
```

**node-config.yaml:**
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mcphub-node-config
  namespace: mcphub
data:
  mcp_settings.json: |
    {
      "mcpServers": {
        "playwright": {
          "command": "npx",
          "args": ["@playwright/mcp@latest", "--headless"],
          "enabled": true
        }
      },
      "systemConfig": {
        "cluster": {
          "enabled": true,
          "mode": "node",
          "node": {
            "coordinatorUrl": "http://mcphub-coordinator:3000",
            "heartbeatInterval": 5000,
            "registerOnStartup": true
          }
        }
      }
    }
```

### Deployments

**coordinator.yaml:**
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mcphub-coordinator
  namespace: mcphub
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mcphub-coordinator
  template:
    metadata:
      labels:
        app: mcphub-coordinator
    spec:
      containers:
        - name: mcphub
          image: samanhappy/mcphub:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 3000
              name: http
          env:
            - name: NODE_ENV
              value: production
            - name: PORT
              value: "3000"
          volumeMounts:
            - name: config
              mountPath: /app/mcp_settings.json
              subPath: mcp_settings.json
            - name: data
              mountPath: /app/data
          resources:
            requests:
              memory: "512Mi"
              cpu: "500m"
            limits:
              memory: "1Gi"
              cpu: "1000m"
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 5
      volumes:
        - name: config
          configMap:
            name: mcphub-coordinator-config
        - name: data
          emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: mcphub-coordinator
  namespace: mcphub
spec:
  selector:
    app: mcphub-coordinator
  ports:
    - port: 3000
      targetPort: 3000
      name: http
  type: LoadBalancer
```

**nodes.yaml:**
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mcphub-node
  namespace: mcphub
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mcphub-node
  template:
    metadata:
      labels:
        app: mcphub-node
    spec:
      containers:
        - name: mcphub
          image: samanhappy/mcphub:latest
          imagePullPolicy: Always
          env:
            - name: NODE_ENV
              value: production
          volumeMounts:
            - name: config
              mountPath: /app/mcp_settings.json
              subPath: mcp_settings.json
            - name: data
              mountPath: /app/data
          resources:
            requests:
              memory: "512Mi"
              cpu: "500m"
            limits:
              memory: "2Gi"
              cpu: "2000m"
      volumes:
        - name: config
          configMap:
            name: mcphub-node-config
        - name: data
          emptyDir: {}
```
## Environment Variables

Create a `.env` file for sensitive values:

```bash
# API Keys
AMAP_MAPS_API_KEY=your-amap-api-key
SLACK_BOT_TOKEN=xoxb-your-slack-bot-token
SLACK_TEAM_ID=T01234567

# Optional: Custom ports
COORDINATOR_PORT=3000
NODE1_PORT=3001
NODE2_PORT=3002
```
## Testing the Cluster

After starting the cluster, test connectivity:

```bash
# Check coordinator health
curl http://localhost:3000/health

# Get cluster status
curl http://localhost:3000/api/cluster/status

# List all nodes
curl http://localhost:3000/api/cluster/nodes

# Test MCP endpoint
curl -X POST http://localhost:3000/mcp \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "initialize",
    "params": {
      "protocolVersion": "2024-11-05",
      "capabilities": {},
      "clientInfo": {
        "name": "test-client",
        "version": "1.0.0"
      }
    },
    "id": 1
  }'
```
## Scaling

### Scale worker nodes (Docker Compose):

```bash
docker-compose up -d --scale node1=3
```

### Scale worker nodes (Kubernetes):

```bash
kubectl scale deployment mcphub-node --replicas=5 -n mcphub
```

61  examples/mcp_settings_with_connection_modes.json  Normal file
@@ -0,0 +1,61 @@
{
  "$schema": "https://json-schema.org/draft-07/schema",
  "description": "Example MCP settings showing different connection modes",
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "connectionMode": "persistent",
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}"
      },
      "enabled": true
    },
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest", "--headless"],
      "connectionMode": "persistent",
      "enabled": true
    },
    "pdf-reader": {
      "command": "npx",
      "args": ["-y", "pdf-mcp-server"],
      "connectionMode": "on-demand",
      "env": {
        "PDF_CACHE_DIR": "/tmp/pdf-cache"
      },
      "enabled": true
    },
    "image-processor": {
      "command": "python",
      "args": ["-m", "image_mcp_server"],
      "connectionMode": "on-demand",
      "env": {
        "IMAGE_OUTPUT_DIR": "/tmp/images"
      },
      "enabled": true
    },
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"],
      "enabled": true
    },
    "slack": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-slack"],
      "connectionMode": "persistent",
      "env": {
        "SLACK_BOT_TOKEN": "${SLACK_BOT_TOKEN}",
        "SLACK_TEAM_ID": "${SLACK_TEAM_ID}"
      },
      "enabled": true
    }
  },
  "users": [
    {
      "username": "admin",
      "password": "$2b$10$Vt7krIvjNgyN67LXqly0uOcTpN0LI55cYRbcKC71pUDAP0nJ7RPa.",
      "isAdmin": true
    }
  ]
}
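Values such as `${SLACK_BOT_TOKEN}` in the example file above are expanded from the process environment when the settings are loaded. The substitution can be approximated as follows (a sketch; MCPHub's actual rules, e.g. around defaults or escaping, may differ):

```typescript
// Rough sketch of ${VAR} expansion for config values. Unknown variables are
// left untouched here; the real behaviour may differ (error, empty string, ...).
function expandEnv(value: string, env: Record<string, string | undefined>): string {
  return value.replace(/\$\{(\w+)\}/g, (match, name) => env[name] ?? match);
}
```

This keeps secrets like tokens out of the checked-in settings file, which is the point of using `${...}` placeholders in the examples.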
@@ -5,7 +5,6 @@ export const PERMISSIONS = {
   SETTINGS_SKIP_AUTH: 'settings:skip_auth',
   SETTINGS_INSTALL_CONFIG: 'settings:install_config',
   SETTINGS_EXPORT_CONFIG: 'settings:export_config',
-  SETTINGS_CLUSTER_CONFIG: 'settings:cluster_config',
 } as const;
 
 export default PERMISSIONS;
@@ -34,35 +34,6 @@ interface MCPRouterConfig {
   baseUrl: string;
 }
 
-interface ClusterNodeConfig {
-  id?: string;
-  name?: string;
-  coordinatorUrl: string;
-  heartbeatInterval?: number;
-  registerOnStartup?: boolean;
-}
-
-interface ClusterCoordinatorConfig {
-  nodeTimeout?: number;
-  cleanupInterval?: number;
-  stickySessionTimeout?: number;
-}
-
-interface ClusterStickySessionConfig {
-  enabled: boolean;
-  strategy: 'consistent-hash' | 'cookie' | 'header';
-  cookieName?: string;
-  headerName?: string;
-}
-
-interface ClusterConfig {
-  enabled: boolean;
-  mode: 'standalone' | 'node' | 'coordinator';
-  node?: ClusterNodeConfig;
-  coordinator?: ClusterCoordinatorConfig;
-  stickySession?: ClusterStickySessionConfig;
-}
-
 interface SystemSettings {
   systemConfig?: {
     routing?: RoutingConfig;
@@ -70,7 +41,6 @@ interface SystemSettings {
     smartRouting?: SmartRoutingConfig;
     mcpRouter?: MCPRouterConfig;
     nameSeparator?: string;
-    cluster?: ClusterConfig;
   };
 }
 
@@ -115,27 +85,6 @@ export const useSettingsData = () => {
     baseUrl: 'https://api.mcprouter.to/v1',
   });
 
-  const [clusterConfig, setClusterConfig] = useState<ClusterConfig>({
-    enabled: false,
-    mode: 'standalone',
-    node: {
-      coordinatorUrl: '',
-      heartbeatInterval: 5000,
-      registerOnStartup: true,
-    },
-    coordinator: {
-      nodeTimeout: 15000,
-      cleanupInterval: 30000,
-      stickySessionTimeout: 3600000,
-    },
-    stickySession: {
-      enabled: true,
-      strategy: 'consistent-hash',
-      cookieName: 'MCPHUB_NODE',
-      headerName: 'X-MCPHub-Node',
-    },
-  });
-
   const [nameSeparator, setNameSeparator] = useState<string>('-');
 
   const [loading, setLoading] = useState(false);
@@ -192,28 +141,6 @@ export const useSettingsData = () => {
       if (data.success && data.data?.systemConfig?.nameSeparator !== undefined) {
         setNameSeparator(data.data.systemConfig.nameSeparator);
       }
-      if (data.success && data.data?.systemConfig?.cluster) {
-        setClusterConfig({
-          enabled: data.data.systemConfig.cluster.enabled ?? false,
-          mode: data.data.systemConfig.cluster.mode || 'standalone',
-          node: data.data.systemConfig.cluster.node || {
-            coordinatorUrl: '',
-            heartbeatInterval: 5000,
-            registerOnStartup: true,
-          },
-          coordinator: data.data.systemConfig.cluster.coordinator || {
-            nodeTimeout: 15000,
-            cleanupInterval: 30000,
-            stickySessionTimeout: 3600000,
-          },
-          stickySession: data.data.systemConfig.cluster.stickySession || {
-            enabled: true,
-            strategy: 'consistent-hash',
-            cookieName: 'MCPHUB_NODE',
-            headerName: 'X-MCPHub-Node',
-          },
-        });
-      }
     } catch (error) {
       console.error('Failed to fetch settings:', error);
       setError(error instanceof Error ? error.message : 'Failed to fetch settings');
@@ -493,39 +420,6 @@ export const useSettingsData = () => {
     }
   };
 
-  // Update cluster configuration
-  const updateClusterConfig = async (updates: Partial<ClusterConfig>) => {
-    setLoading(true);
-    setError(null);
-
-    try {
-      const data = await apiPut('/system-config', {
-        cluster: updates,
-      });
-
-      if (data.success) {
-        setClusterConfig({
-          ...clusterConfig,
-          ...updates,
-        });
-        showToast(t('settings.systemConfigUpdated'));
-        return true;
-      } else {
-        showToast(data.message || t('errors.failedToUpdateSystemConfig'));
-        return false;
-      }
-    } catch (error) {
-      console.error('Failed to update cluster config:', error);
-      const errorMessage =
-        error instanceof Error ? error.message : 'Failed to update cluster config';
-      setError(errorMessage);
-      showToast(errorMessage);
-      return false;
-    } finally {
-      setLoading(false);
-    }
-  };
-
   const exportMCPSettings = async (serverName?: string) => {
     setLoading(true);
     setError(null);
@@ -561,7 +455,6 @@ export const useSettingsData = () => {
     installConfig,
     smartRoutingConfig,
     mcpRouterConfig,
-    clusterConfig,
     nameSeparator,
     loading,
     error,
@@ -575,7 +468,6 @@ export const useSettingsData = () => {
     updateRoutingConfigBatch,
     updateMCPRouterConfig,
     updateMCPRouterConfigBatch,
-    updateClusterConfig,
     updateNameSeparator,
     exportMCPSettings,
   };
@@ -1,99 +1,55 @@
-import React, { useState, useEffect } from 'react';
-import { useTranslation } from 'react-i18next';
-import { useNavigate } from 'react-router-dom';
-import ChangePasswordForm from '@/components/ChangePasswordForm';
-import { Switch } from '@/components/ui/ToggleGroup';
-import { useSettingsData } from '@/hooks/useSettingsData';
-import { useToast } from '@/contexts/ToastContext';
-import { generateRandomKey } from '@/utils/key';
-import { PermissionChecker } from '@/components/PermissionChecker';
-import { PERMISSIONS } from '@/constants/permissions';
-import { Copy, Check, Download } from 'lucide-react';
+import React, { useState, useEffect } from 'react'
+import { useTranslation } from 'react-i18next'
+import { useNavigate } from 'react-router-dom'
+import ChangePasswordForm from '@/components/ChangePasswordForm'
+import { Switch } from '@/components/ui/ToggleGroup'
+import { useSettingsData } from '@/hooks/useSettingsData'
+import { useToast } from '@/contexts/ToastContext'
+import { generateRandomKey } from '@/utils/key'
+import { PermissionChecker } from '@/components/PermissionChecker'
+import { PERMISSIONS } from '@/constants/permissions'
+import { Copy, Check, Download } from 'lucide-react'
 
 const SettingsPage: React.FC = () => {
-  const { t } = useTranslation();
-  const navigate = useNavigate();
-  const { showToast } = useToast();
+  const { t } = useTranslation()
+  const navigate = useNavigate()
+  const { showToast } = useToast()
 
   const [installConfig, setInstallConfig] = useState<{
-    pythonIndexUrl: string;
-    npmRegistry: string;
-    baseUrl: string;
+    pythonIndexUrl: string
+    npmRegistry: string
+    baseUrl: string
   }>({
     pythonIndexUrl: '',
     npmRegistry: '',
     baseUrl: 'http://localhost:3000',
-  });
+  })
 
   const [tempSmartRoutingConfig, setTempSmartRoutingConfig] = useState<{
-    dbUrl: string;
-    openaiApiBaseUrl: string;
-    openaiApiKey: string;
-    openaiApiEmbeddingModel: string;
+    dbUrl: string
+    openaiApiBaseUrl: string
+    openaiApiKey: string
+    openaiApiEmbeddingModel: string
   }>({
     dbUrl: '',
     openaiApiBaseUrl: '',
     openaiApiKey: '',
     openaiApiEmbeddingModel: '',
-  });
+  })
 
   const [tempMCPRouterConfig, setTempMCPRouterConfig] = useState<{
-    apiKey: string;
-    referer: string;
-    title: string;
-    baseUrl: string;
+    apiKey: string
+    referer: string
+    title: string
+    baseUrl: string
  }>({
     apiKey: '',
     referer: 'https://www.mcphubx.com',
     title: 'MCPHub',
     baseUrl: 'https://api.mcprouter.to/v1',
-  });
+  })
 
-  const [tempNameSeparator, setTempNameSeparator] = useState<string>('-');
-
-  const [tempClusterConfig, setTempClusterConfig] = useState<{
-    enabled: boolean;
-    mode: 'standalone' | 'node' | 'coordinator';
-    node: {
-      id?: string;
-      name?: string;
-      coordinatorUrl: string;
-      heartbeatInterval?: number;
-      registerOnStartup?: boolean;
-    };
-    coordinator: {
-      nodeTimeout?: number;
-      cleanupInterval?: number;
-      stickySessionTimeout?: number;
-    };
-    stickySession: {
-      enabled: boolean;
-      strategy: 'consistent-hash' | 'cookie' | 'header';
-      cookieName?: string;
-      headerName?: string;
-    };
-  }>({
-    enabled: false,
-    mode: 'standalone',
-    node: {
-      id: '',
-      name: '',
-      coordinatorUrl: '',
-      heartbeatInterval: 5000,
-      registerOnStartup: true,
-    },
-    coordinator: {
-      nodeTimeout: 15000,
-      cleanupInterval: 30000,
-      stickySessionTimeout: 3600000,
-    },
-    stickySession: {
-      enabled: true,
-      strategy: 'consistent-hash',
-      cookieName: 'MCPHUB_NODE',
-      headerName: 'X-MCPHub-Node',
-    },
-  });
+  const [tempNameSeparator, setTempNameSeparator] = useState<string>('-')
 
   const {
     routingConfig,
@@ -102,7 +58,6 @@ const SettingsPage: React.FC = () => {
     installConfig: savedInstallConfig,
     smartRoutingConfig,
     mcpRouterConfig,
-    clusterConfig,
     nameSeparator,
     loading,
     updateRoutingConfig,
@@ -111,17 +66,16 @@ const SettingsPage: React.FC = () => {
     updateSmartRoutingConfig,
     updateSmartRoutingConfigBatch,
     updateMCPRouterConfig,
-    updateClusterConfig,
     updateNameSeparator,
     exportMCPSettings,
-  } = useSettingsData();
+  } = useSettingsData()
 
   // Update local installConfig when savedInstallConfig changes
   useEffect(() => {
     if (savedInstallConfig) {
-      setInstallConfig(savedInstallConfig);
+      setInstallConfig(savedInstallConfig)
     }
-  }, [savedInstallConfig]);
+  }, [savedInstallConfig])
 
   // Update local tempSmartRoutingConfig when smartRoutingConfig changes
   useEffect(() => {
@@ -131,9 +85,9 @@ const SettingsPage: React.FC = () => {
       openaiApiBaseUrl: smartRoutingConfig.openaiApiBaseUrl || '',
       openaiApiKey: smartRoutingConfig.openaiApiKey || '',
       openaiApiEmbeddingModel: smartRoutingConfig.openaiApiEmbeddingModel || '',
-      });
+      })
     }
-  }, [smartRoutingConfig]);
+  }, [smartRoutingConfig])
 
   // Update local tempMCPRouterConfig when mcpRouterConfig changes
   useEffect(() => {
@@ -143,53 +97,24 @@ const SettingsPage: React.FC = () => {
       referer: mcpRouterConfig.referer || 'https://www.mcphubx.com',
       title: mcpRouterConfig.title || 'MCPHub',
       baseUrl: mcpRouterConfig.baseUrl || 'https://api.mcprouter.to/v1',
-      });
+      })
     }
-  }, [mcpRouterConfig]);
+  }, [mcpRouterConfig])
 
   // Update local tempNameSeparator when nameSeparator changes
   useEffect(() => {
-    setTempNameSeparator(nameSeparator);
-  }, [nameSeparator]);
-
-  // Update local tempClusterConfig when clusterConfig changes
-  useEffect(() => {
-    if (clusterConfig) {
-      setTempClusterConfig({
-        enabled: clusterConfig.enabled ?? false,
-        mode: clusterConfig.mode || 'standalone',
-        node: clusterConfig.node || {
-          id: '',
-          name: '',
-          coordinatorUrl: '',
-          heartbeatInterval: 5000,
-          registerOnStartup: true,
-        },
-        coordinator: clusterConfig.coordinator || {
-          nodeTimeout: 15000,
-          cleanupInterval: 30000,
-          stickySessionTimeout: 3600000,
-        },
-        stickySession: clusterConfig.stickySession || {
-          enabled: true,
-          strategy: 'consistent-hash',
-          cookieName: 'MCPHUB_NODE',
-          headerName: 'X-MCPHub-Node',
-        },
-      });
-    }
-  }, [clusterConfig]);
+    setTempNameSeparator(nameSeparator)
+  }, [nameSeparator])
 
   const [sectionsVisible, setSectionsVisible] = useState({
     routingConfig: false,
     installConfig: false,
     smartRoutingConfig: false,
     mcpRouterConfig: false,
-    clusterConfig: false,
     nameSeparator: false,
     password: false,
     exportConfig: false,
-  });
+  })
 
   const toggleSection = (
     section:
@@ -197,7 +122,6 @@ const SettingsPage: React.FC = () => {
       | 'installConfig'
       | 'smartRoutingConfig'
       | 'mcpRouterConfig'
-      | 'clusterConfig'
       | 'nameSeparator'
       | 'password'
       | 'exportConfig',
@@ -205,8 +129,8 @@ const SettingsPage: React.FC = () => {
     setSectionsVisible((prev) => ({
       ...prev,
       [section]: !prev[section],
-    }));
-  };
+    }))
+  }
 
   const handleRoutingConfigChange = async (
     key:
@@ -220,39 +144,39 @@ const SettingsPage: React.FC = () => {
     // If enableBearerAuth is turned on and there's no key, generate one first
     if (key === 'enableBearerAuth' && value === true) {
       if (!tempRoutingConfig.bearerAuthKey && !routingConfig.bearerAuthKey) {
-        const newKey = generateRandomKey();
-        handleBearerAuthKeyChange(newKey);
+        const newKey = generateRandomKey()
+        handleBearerAuthKeyChange(newKey)
 
         // Update both enableBearerAuth and bearerAuthKey in a single call
         const success = await updateRoutingConfigBatch({
           enableBearerAuth: true,
           bearerAuthKey: newKey,
-        });
+        })
 
         if (success) {
           // Update tempRoutingConfig to reflect the saved values
           setTempRoutingConfig((prev) => ({
             ...prev,
             bearerAuthKey: newKey,
-          }));
+          }))
         }
-        return;
+        return
       }
     }
 
-    await updateRoutingConfig(key, value);
-  };
+    await updateRoutingConfig(key, value)
+  }
 
   const handleBearerAuthKeyChange = (value: string) => {
     setTempRoutingConfig((prev) => ({
       ...prev,
       bearerAuthKey: value,
-    }));
-  };
+    }))
+  }
 
   const saveBearerAuthKey = async () => {
-    await updateRoutingConfig('bearerAuthKey', tempRoutingConfig.bearerAuthKey);
-  };
+    await updateRoutingConfig('bearerAuthKey', tempRoutingConfig.bearerAuthKey)
+  }
 
   const handleInstallConfigChange = (
     key: 'pythonIndexUrl' | 'npmRegistry' | 'baseUrl',
@@ -261,12 +185,12 @@ const SettingsPage: React.FC = () => {
     setInstallConfig({
       ...installConfig,
       [key]: value,
-    });
-  };
+    })
+  }
 
   const saveInstallConfig = async (key: 'pythonIndexUrl' | 'npmRegistry' | 'baseUrl') => {
-    await updateInstallConfig(key, installConfig[key]);
-  };
+    await updateInstallConfig(key, installConfig[key])
+  }
 
   const handleSmartRoutingConfigChange = (
     key: 'dbUrl' | 'openaiApiBaseUrl' | 'openaiApiKey' | 'openaiApiEmbeddingModel',
@@ -275,14 +199,14 @@ const SettingsPage: React.FC = () => {
     setTempSmartRoutingConfig({
       ...tempSmartRoutingConfig,
       [key]: value,
-    });
-  };
+    })
+  }
 
   const saveSmartRoutingConfig = async (
     key: 'dbUrl' | 'openaiApiBaseUrl' | 'openaiApiKey' | 'openaiApiEmbeddingModel',
   ) => {
-    await updateSmartRoutingConfig(key, tempSmartRoutingConfig[key]);
-  };
+    await updateSmartRoutingConfig(key, tempSmartRoutingConfig[key])
+  }
 
   const handleMCPRouterConfigChange = (
     key: 'apiKey' | 'referer' | 'title' | 'baseUrl',
@@ -291,141 +215,141 @@ const SettingsPage: React.FC = () => {
|
||||
setTempMCPRouterConfig({
|
||||
...tempMCPRouterConfig,
|
||||
[key]: value,
|
||||
});
|
||||
};
|
||||
})
|
||||
}
|
||||
|
||||
const saveMCPRouterConfig = async (key: 'apiKey' | 'referer' | 'title' | 'baseUrl') => {
|
||||
await updateMCPRouterConfig(key, tempMCPRouterConfig[key]);
|
||||
};
|
||||
await updateMCPRouterConfig(key, tempMCPRouterConfig[key])
|
||||
}
|
||||
|
||||
const saveNameSeparator = async () => {
|
||||
await updateNameSeparator(tempNameSeparator);
|
||||
};
|
||||
await updateNameSeparator(tempNameSeparator)
|
||||
}
|
||||
|
||||
const handleSmartRoutingEnabledChange = async (value: boolean) => {
|
||||
// If enabling Smart Routing, validate required fields and save any unsaved changes
|
||||
if (value) {
|
||||
const currentDbUrl = tempSmartRoutingConfig.dbUrl || smartRoutingConfig.dbUrl;
|
||||
const currentDbUrl = tempSmartRoutingConfig.dbUrl || smartRoutingConfig.dbUrl
|
||||
const currentOpenaiApiKey =
|
||||
tempSmartRoutingConfig.openaiApiKey || smartRoutingConfig.openaiApiKey;
|
||||
tempSmartRoutingConfig.openaiApiKey || smartRoutingConfig.openaiApiKey
|
||||
|
||||
if (!currentDbUrl || !currentOpenaiApiKey) {
|
||||
const missingFields = [];
|
||||
if (!currentDbUrl) missingFields.push(t('settings.dbUrl'));
|
||||
if (!currentOpenaiApiKey) missingFields.push(t('settings.openaiApiKey'));
|
||||
const missingFields = []
|
||||
if (!currentDbUrl) missingFields.push(t('settings.dbUrl'))
|
||||
if (!currentOpenaiApiKey) missingFields.push(t('settings.openaiApiKey'))
|
||||
|
||||
showToast(
|
||||
t('settings.smartRoutingValidationError', {
|
||||
fields: missingFields.join(', '),
|
||||
}),
|
||||
);
|
||||
return;
|
||||
)
|
||||
return
|
||||
}
|
||||
|
||||
// Prepare updates object with unsaved changes and enabled status
|
||||
const updates: any = { enabled: value };
|
||||
const updates: any = { enabled: value }
|
||||
|
||||
// Check for unsaved changes and include them in the batch update
|
||||
if (tempSmartRoutingConfig.dbUrl !== smartRoutingConfig.dbUrl) {
|
||||
updates.dbUrl = tempSmartRoutingConfig.dbUrl;
|
||||
updates.dbUrl = tempSmartRoutingConfig.dbUrl
|
||||
}
|
||||
if (tempSmartRoutingConfig.openaiApiBaseUrl !== smartRoutingConfig.openaiApiBaseUrl) {
|
||||
updates.openaiApiBaseUrl = tempSmartRoutingConfig.openaiApiBaseUrl;
|
||||
updates.openaiApiBaseUrl = tempSmartRoutingConfig.openaiApiBaseUrl
|
||||
}
|
||||
if (tempSmartRoutingConfig.openaiApiKey !== smartRoutingConfig.openaiApiKey) {
|
||||
updates.openaiApiKey = tempSmartRoutingConfig.openaiApiKey;
|
||||
updates.openaiApiKey = tempSmartRoutingConfig.openaiApiKey
|
||||
}
|
||||
if (
|
||||
tempSmartRoutingConfig.openaiApiEmbeddingModel !==
|
||||
smartRoutingConfig.openaiApiEmbeddingModel
|
||||
) {
|
||||
updates.openaiApiEmbeddingModel = tempSmartRoutingConfig.openaiApiEmbeddingModel;
|
||||
updates.openaiApiEmbeddingModel = tempSmartRoutingConfig.openaiApiEmbeddingModel
|
||||
}
|
||||
|
||||
// Save all changes in a single batch update
|
||||
await updateSmartRoutingConfigBatch(updates);
|
||||
await updateSmartRoutingConfigBatch(updates)
|
||||
} else {
|
||||
// If disabling, just update the enabled status
|
||||
await updateSmartRoutingConfig('enabled', value);
|
||||
await updateSmartRoutingConfig('enabled', value)
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
const handlePasswordChangeSuccess = () => {
|
||||
setTimeout(() => {
|
||||
navigate('/');
|
||||
}, 2000);
|
||||
};
|
||||
navigate('/')
|
||||
}, 2000)
|
||||
}
|
||||
|
||||
const [copiedConfig, setCopiedConfig] = useState(false);
|
||||
const [mcpSettingsJson, setMcpSettingsJson] = useState<string>('');
|
||||
const [copiedConfig, setCopiedConfig] = useState(false)
|
||||
const [mcpSettingsJson, setMcpSettingsJson] = useState<string>('')
|
||||
|
||||
const fetchMcpSettings = async () => {
|
||||
try {
|
||||
const result = await exportMCPSettings();
|
||||
console.log('Fetched MCP settings:', result);
|
||||
const configJson = JSON.stringify(result.data, null, 2);
|
||||
setMcpSettingsJson(configJson);
|
||||
const result = await exportMCPSettings()
|
||||
console.log('Fetched MCP settings:', result)
|
||||
const configJson = JSON.stringify(result.data, null, 2)
|
||||
setMcpSettingsJson(configJson)
|
||||
} catch (error) {
|
||||
console.error('Error fetching MCP settings:', error);
|
||||
showToast(t('settings.exportError') || 'Failed to fetch settings', 'error');
|
||||
console.error('Error fetching MCP settings:', error)
|
||||
showToast(t('settings.exportError') || 'Failed to fetch settings', 'error')
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
useEffect(() => {
|
||||
if (sectionsVisible.exportConfig && !mcpSettingsJson) {
|
||||
fetchMcpSettings();
|
||||
fetchMcpSettings()
|
||||
}
|
||||
}, [sectionsVisible.exportConfig]);
|
||||
}, [sectionsVisible.exportConfig])
|
||||
|
||||
  const handleCopyConfig = async () => {
    if (!mcpSettingsJson) return;

    try {
      if (navigator.clipboard && window.isSecureContext) {
        await navigator.clipboard.writeText(mcpSettingsJson);
        setCopiedConfig(true);
        showToast(t('common.copySuccess') || 'Copied to clipboard', 'success');
        setTimeout(() => setCopiedConfig(false), 2000);
      } else {
        // Fallback for HTTP or unsupported clipboard API
        const textArea = document.createElement('textarea');
        textArea.value = mcpSettingsJson;
        textArea.style.position = 'fixed';
        textArea.style.left = '-9999px';
        document.body.appendChild(textArea);
        textArea.focus();
        textArea.select();
        try {
          document.execCommand('copy');
          setCopiedConfig(true);
          showToast(t('common.copySuccess') || 'Copied to clipboard', 'success');
          setTimeout(() => setCopiedConfig(false), 2000);
        } catch (err) {
          showToast(t('common.copyFailed') || 'Copy failed', 'error');
          console.error('Copy to clipboard failed:', err);
        }
        document.body.removeChild(textArea);
      }
    } catch (error) {
      console.error('Error copying configuration:', error);
      showToast(t('common.copyFailed') || 'Copy failed', 'error');
    }
  };

  const handleDownloadConfig = () => {
    if (!mcpSettingsJson) return;

    const blob = new Blob([mcpSettingsJson], { type: 'application/json' });
    const url = URL.createObjectURL(blob);
    const link = document.createElement('a');
    link.href = url;
    link.download = 'mcp_settings.json';
    document.body.appendChild(link);
    link.click();
    document.body.removeChild(link);
    URL.revokeObjectURL(url);
    showToast(t('settings.exportSuccess') || 'Settings exported successfully', 'success');
  };

  return (
    <div className="container mx-auto">
@@ -639,432 +563,6 @@ const SettingsPage: React.FC = () => {
        </div>
      </PermissionChecker>

      {/* Cluster Configuration Settings */}
      <PermissionChecker permissions={PERMISSIONS.SETTINGS_CLUSTER_CONFIG}>
        <div className="bg-white shadow rounded-lg py-4 px-6 mb-6 page-card dashboard-card">
          <div
            className="flex justify-between items-center cursor-pointer transition-colors duration-200 hover:text-blue-600"
            onClick={() => toggleSection('clusterConfig')}
          >
            <h2 className="font-semibold text-gray-800">{t('settings.clusterConfig')}</h2>
            <span className="text-gray-500 transition-transform duration-200">
              {sectionsVisible.clusterConfig ? '▼' : '►'}
            </span>
          </div>

          {sectionsVisible.clusterConfig && (
            <div className="space-y-4 mt-4">
              {/* Enable Cluster Mode */}
              <div className="flex items-center justify-between p-3 bg-gray-50 rounded-md">
                <div>
                  <h3 className="font-medium text-gray-700">{t('settings.clusterEnabled')}</h3>
                  <p className="text-sm text-gray-500">{t('settings.clusterEnabledDescription')}</p>
                </div>
                <Switch
                  disabled={loading}
                  checked={tempClusterConfig.enabled}
                  onCheckedChange={(checked) => {
                    setTempClusterConfig((prev) => ({ ...prev, enabled: checked }));
                    updateClusterConfig({ enabled: checked });
                  }}
                />
              </div>

              {/* Cluster Mode Selection */}
              {tempClusterConfig.enabled && (
                <div className="p-3 bg-gray-50 rounded-md">
                  <div className="mb-2">
                    <h3 className="font-medium text-gray-700">{t('settings.clusterMode')}</h3>
                    <p className="text-sm text-gray-500">{t('settings.clusterModeDescription')}</p>
                  </div>
                  <select
                    value={tempClusterConfig.mode}
                    onChange={(e) => {
                      const mode = e.target.value as 'standalone' | 'node' | 'coordinator';
                      setTempClusterConfig((prev) => ({ ...prev, mode }));
                      updateClusterConfig({ mode });
                    }}
                    className="mt-1 block w-full py-2 px-3 border border-gray-300 rounded-md shadow-sm focus:outline-none focus:ring-blue-500 focus:border-blue-500 sm:text-sm"
                    disabled={loading}
                  >
                    <option value="standalone">{t('settings.clusterModeStandalone')}</option>
                    <option value="node">{t('settings.clusterModeNode')}</option>
                    <option value="coordinator">{t('settings.clusterModeCoordinator')}</option>
                  </select>
                </div>
              )}

              {/* Node Configuration */}
              {tempClusterConfig.enabled && tempClusterConfig.mode === 'node' && (
                <div className="p-3 bg-blue-50 border border-blue-200 rounded-md space-y-3">
                  <h3 className="font-semibold text-gray-800 mb-2">{t('settings.nodeConfig')}</h3>

                  {/* Coordinator URL */}
                  <div>
                    <label className="block text-sm font-medium text-gray-700 mb-1">
                      {t('settings.coordinatorUrl')} <span className="text-red-500">*</span>
                    </label>
                    <p className="text-xs text-gray-500 mb-2">
                      {t('settings.coordinatorUrlDescription')}
                    </p>
                    <input
                      type="text"
                      value={tempClusterConfig.node.coordinatorUrl}
                      onChange={(e) => {
                        const coordinatorUrl = e.target.value;
                        setTempClusterConfig((prev) => ({
                          ...prev,
                          node: { ...prev.node, coordinatorUrl },
                        }));
                      }}
                      onBlur={() => updateClusterConfig({ node: { ...tempClusterConfig.node } })}
                      placeholder={t('settings.coordinatorUrlPlaceholder')}
                      className="block w-full py-2 px-3 border border-gray-300 rounded-md shadow-sm focus:outline-none focus:ring-blue-500 focus:border-blue-500 sm:text-sm"
                      disabled={loading}
                    />
                  </div>

                  {/* Node ID */}
                  <div>
                    <label className="block text-sm font-medium text-gray-700 mb-1">
                      {t('settings.nodeId')}
                    </label>
                    <p className="text-xs text-gray-500 mb-2">{t('settings.nodeIdDescription')}</p>
                    <input
                      type="text"
                      value={tempClusterConfig.node.id || ''}
                      onChange={(e) => {
                        const id = e.target.value;
                        setTempClusterConfig((prev) => ({
                          ...prev,
                          node: { ...prev.node, id },
                        }));
                      }}
                      onBlur={() => updateClusterConfig({ node: { ...tempClusterConfig.node } })}
                      placeholder={t('settings.nodeIdPlaceholder')}
                      className="block w-full py-2 px-3 border border-gray-300 rounded-md shadow-sm focus:outline-none focus:ring-blue-500 focus:border-blue-500 sm:text-sm"
                      disabled={loading}
                    />
                  </div>

                  {/* Node Name */}
                  <div>
                    <label className="block text-sm font-medium text-gray-700 mb-1">
                      {t('settings.nodeName')}
                    </label>
                    <p className="text-xs text-gray-500 mb-2">
                      {t('settings.nodeNameDescription')}
                    </p>
                    <input
                      type="text"
                      value={tempClusterConfig.node.name || ''}
                      onChange={(e) => {
                        const name = e.target.value;
                        setTempClusterConfig((prev) => ({
                          ...prev,
                          node: { ...prev.node, name },
                        }));
                      }}
                      onBlur={() => updateClusterConfig({ node: { ...tempClusterConfig.node } })}
                      placeholder={t('settings.nodeNamePlaceholder')}
                      className="block w-full py-2 px-3 border border-gray-300 rounded-md shadow-sm focus:outline-none focus:ring-blue-500 focus:border-blue-500 sm:text-sm"
                      disabled={loading}
                    />
                  </div>

                  {/* Heartbeat Interval */}
                  <div>
                    <label className="block text-sm font-medium text-gray-700 mb-1">
                      {t('settings.heartbeatInterval')}
                    </label>
                    <p className="text-xs text-gray-500 mb-2">
                      {t('settings.heartbeatIntervalDescription')}
                    </p>
                    <input
                      type="number"
                      value={tempClusterConfig.node.heartbeatInterval || 5000}
                      onChange={(e) => {
                        const heartbeatInterval = parseInt(e.target.value);
                        setTempClusterConfig((prev) => ({
                          ...prev,
                          node: { ...prev.node, heartbeatInterval },
                        }));
                      }}
                      onBlur={() => updateClusterConfig({ node: { ...tempClusterConfig.node } })}
                      placeholder={t('settings.heartbeatIntervalPlaceholder')}
                      className="block w-full py-2 px-3 border border-gray-300 rounded-md shadow-sm focus:outline-none focus:ring-blue-500 focus:border-blue-500 sm:text-sm"
                      disabled={loading}
                      min="1000"
                      step="1000"
                    />
                  </div>

                  {/* Register on Startup */}
                  <div className="flex items-center justify-between">
                    <div>
                      <label className="block text-sm font-medium text-gray-700">
                        {t('settings.registerOnStartup')}
                      </label>
                      <p className="text-xs text-gray-500">
                        {t('settings.registerOnStartupDescription')}
                      </p>
                    </div>
                    <Switch
                      disabled={loading}
                      checked={tempClusterConfig.node.registerOnStartup ?? true}
                      onCheckedChange={(checked) => {
                        setTempClusterConfig((prev) => ({
                          ...prev,
                          node: { ...prev.node, registerOnStartup: checked },
                        }));
                        updateClusterConfig({
                          node: { ...tempClusterConfig.node, registerOnStartup: checked },
                        });
                      }}
                    />
                  </div>
                </div>
              )}

              {/* Coordinator Configuration */}
              {tempClusterConfig.enabled && tempClusterConfig.mode === 'coordinator' && (
                <div className="p-3 bg-purple-50 border border-purple-200 rounded-md space-y-3">
                  <h3 className="font-semibold text-gray-800 mb-2">
                    {t('settings.coordinatorConfig')}
                  </h3>

                  {/* Node Timeout */}
                  <div>
                    <label className="block text-sm font-medium text-gray-700 mb-1">
                      {t('settings.nodeTimeout')}
                    </label>
                    <p className="text-xs text-gray-500 mb-2">
                      {t('settings.nodeTimeoutDescription')}
                    </p>
                    <input
                      type="number"
                      value={tempClusterConfig.coordinator.nodeTimeout || 15000}
                      onChange={(e) => {
                        const nodeTimeout = parseInt(e.target.value);
                        setTempClusterConfig((prev) => ({
                          ...prev,
                          coordinator: { ...prev.coordinator, nodeTimeout },
                        }));
                      }}
                      onBlur={() =>
                        updateClusterConfig({ coordinator: { ...tempClusterConfig.coordinator } })
                      }
                      placeholder={t('settings.nodeTimeoutPlaceholder')}
                      className="block w-full py-2 px-3 border border-gray-300 rounded-md shadow-sm focus:outline-none focus:ring-blue-500 focus:border-blue-500 sm:text-sm"
                      disabled={loading}
                      min="5000"
                      step="1000"
                    />
                  </div>

                  {/* Cleanup Interval */}
                  <div>
                    <label className="block text-sm font-medium text-gray-700 mb-1">
                      {t('settings.cleanupInterval')}
                    </label>
                    <p className="text-xs text-gray-500 mb-2">
                      {t('settings.cleanupIntervalDescription')}
                    </p>
                    <input
                      type="number"
                      value={tempClusterConfig.coordinator.cleanupInterval || 30000}
                      onChange={(e) => {
                        const cleanupInterval = parseInt(e.target.value);
                        setTempClusterConfig((prev) => ({
                          ...prev,
                          coordinator: { ...prev.coordinator, cleanupInterval },
                        }));
                      }}
                      onBlur={() =>
                        updateClusterConfig({ coordinator: { ...tempClusterConfig.coordinator } })
                      }
                      placeholder={t('settings.cleanupIntervalPlaceholder')}
                      className="block w-full py-2 px-3 border border-gray-300 rounded-md shadow-sm focus:outline-none focus:ring-blue-500 focus:border-blue-500 sm:text-sm"
                      disabled={loading}
                      min="10000"
                      step="5000"
                    />
                  </div>

                  {/* Sticky Session Timeout */}
                  <div>
                    <label className="block text-sm font-medium text-gray-700 mb-1">
                      {t('settings.stickySessionTimeout')}
                    </label>
                    <p className="text-xs text-gray-500 mb-2">
                      {t('settings.stickySessionTimeoutDescription')}
                    </p>
                    <input
                      type="number"
                      value={tempClusterConfig.coordinator.stickySessionTimeout || 3600000}
                      onChange={(e) => {
                        const stickySessionTimeout = parseInt(e.target.value);
                        setTempClusterConfig((prev) => ({
                          ...prev,
                          coordinator: { ...prev.coordinator, stickySessionTimeout },
                        }));
                      }}
                      onBlur={() =>
                        updateClusterConfig({ coordinator: { ...tempClusterConfig.coordinator } })
                      }
                      placeholder={t('settings.stickySessionTimeoutPlaceholder')}
                      className="block w-full py-2 px-3 border border-gray-300 rounded-md shadow-sm focus:outline-none focus:ring-blue-500 focus:border-blue-500 sm:text-sm"
                      disabled={loading}
                      min="60000"
                      step="60000"
                    />
                  </div>
                </div>
              )}

              {/* Sticky Session Configuration */}
              {tempClusterConfig.enabled &&
                (tempClusterConfig.mode === 'coordinator' || tempClusterConfig.mode === 'node') && (
                  <div className="p-3 bg-green-50 border border-green-200 rounded-md space-y-3">
                    <h3 className="font-semibold text-gray-800 mb-2">
                      {t('settings.stickySessionConfig')}
                    </h3>

                    {/* Enable Sticky Sessions */}
                    <div className="flex items-center justify-between">
                      <div>
                        <label className="block text-sm font-medium text-gray-700">
                          {t('settings.stickySessionEnabled')}
                        </label>
                        <p className="text-xs text-gray-500">
                          {t('settings.stickySessionEnabledDescription')}
                        </p>
                      </div>
                      <Switch
                        disabled={loading}
                        checked={tempClusterConfig.stickySession.enabled}
                        onCheckedChange={(checked) => {
                          setTempClusterConfig((prev) => ({
                            ...prev,
                            stickySession: { ...prev.stickySession, enabled: checked },
                          }));
                          updateClusterConfig({
                            stickySession: { ...tempClusterConfig.stickySession, enabled: checked },
                          });
                        }}
                      />
                    </div>

                    {tempClusterConfig.stickySession.enabled && (
                      <>
                        {/* Session Strategy */}
                        <div>
                          <label className="block text-sm font-medium text-gray-700 mb-1">
                            {t('settings.stickySessionStrategy')}
                          </label>
                          <p className="text-xs text-gray-500 mb-2">
                            {t('settings.stickySessionStrategyDescription')}
                          </p>
                          <select
                            value={tempClusterConfig.stickySession.strategy}
                            onChange={(e) => {
                              const strategy = e.target.value as
                                | 'consistent-hash'
                                | 'cookie'
                                | 'header';
                              setTempClusterConfig((prev) => ({
                                ...prev,
                                stickySession: { ...prev.stickySession, strategy },
                              }));
                              updateClusterConfig({
                                stickySession: { ...tempClusterConfig.stickySession, strategy },
                              });
                            }}
                            className="block w-full py-2 px-3 border border-gray-300 rounded-md shadow-sm focus:outline-none focus:ring-blue-500 focus:border-blue-500 sm:text-sm"
                            disabled={loading}
                          >
                            <option value="consistent-hash">
                              {t('settings.stickySessionStrategyConsistentHash')}
                            </option>
                            <option value="cookie">
                              {t('settings.stickySessionStrategyCookie')}
                            </option>
                            <option value="header">
                              {t('settings.stickySessionStrategyHeader')}
                            </option>
                          </select>
                        </div>

                        {/* Cookie Name (only for cookie strategy) */}
                        {tempClusterConfig.stickySession.strategy === 'cookie' && (
                          <div>
                            <label className="block text-sm font-medium text-gray-700 mb-1">
                              {t('settings.cookieName')}
                            </label>
                            <p className="text-xs text-gray-500 mb-2">
                              {t('settings.cookieNameDescription')}
                            </p>
                            <input
                              type="text"
                              value={tempClusterConfig.stickySession.cookieName || 'MCPHUB_NODE'}
                              onChange={(e) => {
                                const cookieName = e.target.value;
                                setTempClusterConfig((prev) => ({
                                  ...prev,
                                  stickySession: { ...prev.stickySession, cookieName },
                                }));
                              }}
                              onBlur={() =>
                                updateClusterConfig({
                                  stickySession: { ...tempClusterConfig.stickySession },
                                })
                              }
                              placeholder={t('settings.cookieNamePlaceholder')}
                              className="block w-full py-2 px-3 border border-gray-300 rounded-md shadow-sm focus:outline-none focus:ring-blue-500 focus:border-blue-500 sm:text-sm"
                              disabled={loading}
                            />
                          </div>
                        )}

                        {/* Header Name (only for header strategy) */}
                        {tempClusterConfig.stickySession.strategy === 'header' && (
                          <div>
                            <label className="block text-sm font-medium text-gray-700 mb-1">
                              {t('settings.headerName')}
                            </label>
                            <p className="text-xs text-gray-500 mb-2">
                              {t('settings.headerNameDescription')}
                            </p>
                            <input
                              type="text"
                              value={tempClusterConfig.stickySession.headerName || 'X-MCPHub-Node'}
                              onChange={(e) => {
                                const headerName = e.target.value;
                                setTempClusterConfig((prev) => ({
                                  ...prev,
                                  stickySession: { ...prev.stickySession, headerName },
                                }));
                              }}
                              onBlur={() =>
                                updateClusterConfig({
                                  stickySession: { ...tempClusterConfig.stickySession },
                                })
                              }
                              placeholder={t('settings.headerNamePlaceholder')}
                              className="block w-full py-2 px-3 border border-gray-300 rounded-md shadow-sm focus:outline-none focus:ring-blue-500 focus:border-blue-500 sm:text-sm"
                              disabled={loading}
                            />
                          </div>
                        )}
                      </>
                    )}
                  </div>
                )}
            </div>
          )}
        </div>
      </PermissionChecker>

      {/* System Settings */}
      <div className="bg-white shadow rounded-lg py-4 px-6 mb-6 dashboard-card">
        <div
@@ -1296,10 +794,7 @@ const SettingsPage: React.FC = () => {
      </PermissionChecker>

      {/* Change Password */}
      <div
        className="bg-white shadow rounded-lg py-4 px-6 mb-6 dashboard-card"
        data-section="password"
      >
        <div
          className="flex justify-between items-center cursor-pointer"
          onClick={() => toggleSection('password')}
@@ -1369,7 +864,7 @@ const SettingsPage: React.FC = () => {
        </div>
      </PermissionChecker>
    </div>
  );
};

export default SettingsPage;

@@ -574,53 +574,6 @@
  "systemSettings": "System Settings",
  "nameSeparatorLabel": "Name Separator",
  "nameSeparatorDescription": "Character used to separate server name and tool/prompt name (default: -)",
  "clusterConfig": "Cluster Configuration",
  "clusterEnabled": "Enable Cluster Mode",
  "clusterEnabledDescription": "Enable distributed cluster deployment for high availability and scalability",
  "clusterMode": "Cluster Mode",
  "clusterModeDescription": "Select the operating mode for this instance",
  "clusterModeStandalone": "Standalone",
  "clusterModeNode": "Node",
  "clusterModeCoordinator": "Coordinator",
  "nodeConfig": "Node Configuration",
  "nodeId": "Node ID",
  "nodeIdDescription": "Unique identifier for this node (auto-generated if not provided)",
  "nodeIdPlaceholder": "e.g. node-1",
  "nodeName": "Node Name",
  "nodeNameDescription": "Human-readable name for this node (defaults to hostname)",
  "nodeNamePlaceholder": "e.g. mcp-node-1",
  "coordinatorUrl": "Coordinator URL",
  "coordinatorUrlDescription": "URL of the coordinator node to register with",
  "coordinatorUrlPlaceholder": "http://coordinator:3000",
  "heartbeatInterval": "Heartbeat Interval (ms)",
  "heartbeatIntervalDescription": "Interval in milliseconds between heartbeat signals (default: 5000)",
  "heartbeatIntervalPlaceholder": "5000",
  "registerOnStartup": "Register on Startup",
  "registerOnStartupDescription": "Automatically register with coordinator when node starts (default: true)",
  "coordinatorConfig": "Coordinator Configuration",
  "nodeTimeout": "Node Timeout (ms)",
  "nodeTimeoutDescription": "Time in milliseconds before marking a node as unhealthy (default: 15000)",
  "nodeTimeoutPlaceholder": "15000",
  "cleanupInterval": "Cleanup Interval (ms)",
  "cleanupIntervalDescription": "Interval for cleaning up inactive nodes in milliseconds (default: 30000)",
  "cleanupIntervalPlaceholder": "30000",
  "stickySessionTimeout": "Sticky Session Timeout (ms)",
  "stickySessionTimeoutDescription": "Session timeout in milliseconds (default: 3600000 = 1 hour)",
  "stickySessionTimeoutPlaceholder": "3600000",
  "stickySessionConfig": "Sticky Session Configuration",
  "stickySessionEnabled": "Enable Sticky Sessions",
  "stickySessionEnabledDescription": "Enable session affinity to route requests from the same client to the same node",
  "stickySessionStrategy": "Session Strategy",
  "stickySessionStrategyDescription": "Strategy for maintaining session affinity",
  "stickySessionStrategyConsistentHash": "Consistent Hash",
  "stickySessionStrategyCookie": "Cookie",
  "stickySessionStrategyHeader": "Header",
  "cookieName": "Cookie Name",
  "cookieNameDescription": "Cookie name for cookie-based sticky sessions (default: MCPHUB_NODE)",
  "cookieNamePlaceholder": "MCPHUB_NODE",
  "headerName": "Header Name",
  "headerNameDescription": "Header name for header-based sticky sessions (default: X-MCPHub-Node)",
  "headerNamePlaceholder": "X-MCPHub-Node",
  "restartRequired": "Configuration saved. It is recommended to restart the application to ensure all services load the new settings correctly.",
  "exportMcpSettings": "Export Settings",
  "mcpSettingsJson": "MCP Settings JSON",

@@ -574,53 +574,6 @@
  "systemSettings": "Paramètres système",
  "nameSeparatorLabel": "Séparateur de noms",
  "nameSeparatorDescription": "Caractère utilisé pour séparer le nom du serveur et le nom de l'outil/prompt (par défaut : -)",
  "clusterConfig": "Configuration du cluster",
  "clusterEnabled": "Activer le mode cluster",
  "clusterEnabledDescription": "Activer le déploiement en cluster distribué pour la haute disponibilité et l'évolutivité",
  "clusterMode": "Mode cluster",
  "clusterModeDescription": "Sélectionnez le mode de fonctionnement pour cette instance",
  "clusterModeStandalone": "Autonome",
  "clusterModeNode": "Nœud",
  "clusterModeCoordinator": "Coordinateur",
  "nodeConfig": "Configuration du nœud",
  "nodeId": "ID du nœud",
  "nodeIdDescription": "Identifiant unique pour ce nœud (généré automatiquement si non fourni)",
  "nodeIdPlaceholder": "ex. node-1",
  "nodeName": "Nom du nœud",
  "nodeNameDescription": "Nom lisible par l'homme pour ce nœud (par défaut, nom d'hôte)",
  "nodeNamePlaceholder": "ex. mcp-node-1",
  "coordinatorUrl": "URL du coordinateur",
  "coordinatorUrlDescription": "URL du nœud coordinateur auquel s'inscrire",
  "coordinatorUrlPlaceholder": "http://coordinator:3000",
  "heartbeatInterval": "Intervalle de battement de cœur (ms)",
  "heartbeatIntervalDescription": "Intervalle en millisecondes entre les signaux de battement de cœur (par défaut : 5000)",
  "heartbeatIntervalPlaceholder": "5000",
  "registerOnStartup": "S'inscrire au démarrage",
  "registerOnStartupDescription": "S'inscrire automatiquement auprès du coordinateur au démarrage du nœud (par défaut : true)",
  "coordinatorConfig": "Configuration du coordinateur",
  "nodeTimeout": "Délai d'expiration du nœud (ms)",
  "nodeTimeoutDescription": "Temps en millisecondes avant de marquer un nœud comme non sain (par défaut : 15000)",
  "nodeTimeoutPlaceholder": "15000",
  "cleanupInterval": "Intervalle de nettoyage (ms)",
  "cleanupIntervalDescription": "Intervalle de nettoyage des nœuds inactifs en millisecondes (par défaut : 30000)",
  "cleanupIntervalPlaceholder": "30000",
  "stickySessionTimeout": "Délai d'expiration de la session persistante (ms)",
  "stickySessionTimeoutDescription": "Délai d'expiration de la session en millisecondes (par défaut : 3600000 = 1 heure)",
  "stickySessionTimeoutPlaceholder": "3600000",
  "stickySessionConfig": "Configuration de la session persistante",
  "stickySessionEnabled": "Activer les sessions persistantes",
  "stickySessionEnabledDescription": "Activer l'affinité de session pour acheminer les requêtes du même client vers le même nœud",
  "stickySessionStrategy": "Stratégie de session",
  "stickySessionStrategyDescription": "Stratégie pour maintenir l'affinité de session",
  "stickySessionStrategyConsistentHash": "Hachage cohérent",
  "stickySessionStrategyCookie": "Cookie",
  "stickySessionStrategyHeader": "En-tête",
  "cookieName": "Nom du cookie",
  "cookieNameDescription": "Nom du cookie pour les sessions persistantes basées sur les cookies (par défaut : MCPHUB_NODE)",
  "cookieNamePlaceholder": "MCPHUB_NODE",
  "headerName": "Nom de l'en-tête",
  "headerNameDescription": "Nom de l'en-tête pour les sessions persistantes basées sur les en-têtes (par défaut : X-MCPHub-Node)",
  "headerNamePlaceholder": "X-MCPHub-Node",
  "restartRequired": "Configuration enregistrée. Il est recommandé de redémarrer l'application pour s'assurer que tous les services chargent correctement les nouveaux paramètres.",
  "exportMcpSettings": "Exporter les paramètres",
  "mcpSettingsJson": "JSON des paramètres MCP",

@@ -576,53 +576,6 @@
  "systemSettings": "系统设置",
  "nameSeparatorLabel": "名称分隔符",
  "nameSeparatorDescription": "用于分隔服务器名称和工具/提示名称(默认:-)",
  "clusterConfig": "集群配置",
  "clusterEnabled": "启用集群模式",
  "clusterEnabledDescription": "启用分布式集群部署,实现高可用和可扩展性",
  "clusterMode": "集群模式",
  "clusterModeDescription": "选择此实例的运行模式",
  "clusterModeStandalone": "独立模式",
  "clusterModeNode": "节点模式",
  "clusterModeCoordinator": "协调器模式",
  "nodeConfig": "节点配置",
  "nodeId": "节点 ID",
  "nodeIdDescription": "节点的唯一标识符(如果未提供则自动生成)",
  "nodeIdPlaceholder": "例如: node-1",
  "nodeName": "节点名称",
  "nodeNameDescription": "节点的可读名称(默认为主机名)",
  "nodeNamePlaceholder": "例如: mcp-node-1",
  "coordinatorUrl": "协调器地址",
  "coordinatorUrlDescription": "要注册的协调器节点的地址",
  "coordinatorUrlPlaceholder": "http://coordinator:3000",
  "heartbeatInterval": "心跳间隔(毫秒)",
  "heartbeatIntervalDescription": "心跳信号的发送间隔,单位为毫秒(默认:5000)",
  "heartbeatIntervalPlaceholder": "5000",
  "registerOnStartup": "启动时注册",
  "registerOnStartupDescription": "节点启动时自动向协调器注册(默认:true)",
  "coordinatorConfig": "协调器配置",
  "nodeTimeout": "节点超时(毫秒)",
  "nodeTimeoutDescription": "将节点标记为不健康之前的超时时间,单位为毫秒(默认:15000)",
  "nodeTimeoutPlaceholder": "15000",
  "cleanupInterval": "清理间隔(毫秒)",
  "cleanupIntervalDescription": "清理非活动节点的间隔时间,单位为毫秒(默认:30000)",
  "cleanupIntervalPlaceholder": "30000",
  "stickySessionTimeout": "会话超时(毫秒)",
  "stickySessionTimeoutDescription": "会话的超时时间,单位为毫秒(默认:3600000 = 1 小时)",
  "stickySessionTimeoutPlaceholder": "3600000",
  "stickySessionConfig": "会话保持配置",
  "stickySessionEnabled": "启用会话保持",
  "stickySessionEnabledDescription": "启用会话亲和性,将来自同一客户端的请求路由到同一节点",
  "stickySessionStrategy": "会话策略",
  "stickySessionStrategyDescription": "维护会话亲和性的策略",
  "stickySessionStrategyConsistentHash": "一致性哈希",
  "stickySessionStrategyCookie": "Cookie",
  "stickySessionStrategyHeader": "Header",
  "cookieName": "Cookie 名称",
  "cookieNameDescription": "基于 Cookie 的会话保持使用的 Cookie 名称(默认:MCPHUB_NODE)",
  "cookieNamePlaceholder": "MCPHUB_NODE",
  "headerName": "Header 名称",
  "headerNameDescription": "基于 Header 的会话保持使用的 Header 名称(默认:X-MCPHub-Node)",
  "headerNamePlaceholder": "X-MCPHub-Node",
  "restartRequired": "配置已保存。为确保所有服务正确加载新设置,建议重启应用。",
  "exportMcpSettings": "导出配置",
  "mcpSettingsJson": "MCP 配置 JSON",

@@ -1,240 +0,0 @@
/**
 * Cluster Controller
 *
 * Handles cluster-related API endpoints:
 * - Node registration
 * - Heartbeat updates
 * - Cluster status queries
 * - Session affinity management
 */

import { Request, Response } from 'express';
import {
  getClusterMode,
  isClusterEnabled,
  getCurrentNodeId,
  registerNode,
  updateNodeHeartbeat,
  getActiveNodes,
  getAllNodes,
  getServerReplicas,
  getSessionAffinity,
  getClusterStats,
} from '../services/clusterService.js';
import { ClusterNode } from '../types/index.js';

/**
 * Get cluster status
 * GET /api/cluster/status
 */
export const getClusterStatus = (_req: Request, res: Response): void => {
  try {
    const enabled = isClusterEnabled();
    const mode = getClusterMode();
    const nodeId = getCurrentNodeId();
    const stats = getClusterStats();

    res.json({
      success: true,
      data: {
        enabled,
        mode,
        nodeId,
        stats,
      },
    });
  } catch (error) {
    console.error('Error getting cluster status:', error);
    res.status(500).json({
      success: false,
      message: 'Failed to get cluster status',
    });
  }
};

/**
 * Register a node (coordinator only)
 * POST /api/cluster/register
 */
export const registerNodeEndpoint = (req: Request, res: Response): void => {
  try {
    const mode = getClusterMode();

    if (mode !== 'coordinator') {
      res.status(403).json({
        success: false,
        message: 'This endpoint is only available on coordinator nodes',
      });
      return;
    }

    const nodeInfo: ClusterNode = req.body;

    // Validate required fields
    if (!nodeInfo.id || !nodeInfo.name || !nodeInfo.url) {
      res.status(400).json({
        success: false,
        message: 'Missing required fields: id, name, url',
      });
      return;
    }

    registerNode(nodeInfo);

    res.json({
      success: true,
      message: 'Node registered successfully',
    });
  } catch (error) {
    console.error('Error registering node:', error);
    res.status(500).json({
      success: false,
      message: 'Failed to register node',
    });
  }
};

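The `registerNodeEndpoint` handler above accepts a JSON body and rejects it unless `id`, `name`, and `url` are all present. A minimal sketch of a conforming payload plus the same required-field check, for a quick sanity test outside the server (the node values here are hypothetical examples, not part of the repository):

```typescript
// Shape of the registration payload POSTed to /api/cluster/register.
// Field names follow the controller's validation; the values are
// hypothetical examples.
interface ClusterNodePayload {
  id: string;
  name: string;
  url: string;
}

// Mirrors the controller's check: !id || !name || !url -> 400.
function isValidRegistration(node: Partial<ClusterNodePayload>): boolean {
  return Boolean(node.id && node.name && node.url);
}

const payload: ClusterNodePayload = {
  id: 'node-1',
  name: 'mcp-node-1',
  url: 'http://node-1:3000',
};

console.log(isValidRegistration(payload)); // true
console.log(isValidRegistration({ id: 'node-1' })); // false
```

Note the check only verifies presence, matching the controller; URL well-formedness is not validated here.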
/**
 * Update node heartbeat (coordinator only)
 * POST /api/cluster/heartbeat
 */
export const updateHeartbeat = (req: Request, res: Response): void => {
  try {
    const mode = getClusterMode();

    if (mode !== 'coordinator') {
      res.status(403).json({
        success: false,
        message: 'This endpoint is only available on coordinator nodes',
      });
      return;
    }

    const { id, servers } = req.body;

    if (!id) {
      res.status(400).json({
        success: false,
        message: 'Missing required field: id',
      });
      return;
    }

    updateNodeHeartbeat(id, servers || []);

    res.json({
      success: true,
      message: 'Heartbeat updated successfully',
    });
  } catch (error) {
    console.error('Error updating heartbeat:', error);
    res.status(500).json({
      success: false,
      message: 'Failed to update heartbeat',
    });
  }
};

/**
|
||||
* Get all nodes (coordinator only)
|
||||
* GET /api/cluster/nodes
|
||||
*/
|
||||
export const getNodes = (req: Request, res: Response): void => {
|
||||
try {
|
||||
const mode = getClusterMode();
|
||||
|
||||
if (mode !== 'coordinator') {
|
||||
res.status(403).json({
|
||||
success: false,
|
||||
message: 'This endpoint is only available on coordinator nodes',
|
||||
});
|
||||
return;
|
||||
}
|
||||
|
||||
const activeOnly = req.query.active === 'true';
|
||||
const nodes = activeOnly ? getActiveNodes() : getAllNodes();
|
||||
|
||||
res.json({
|
||||
success: true,
|
||||
data: nodes,
|
||||
});
|
||||
} catch (error) {
|
||||
console.error('Error getting nodes:', error);
|
||||
res.status(500).json({
|
||||
success: false,
|
||||
message: 'Failed to get nodes',
|
||||
});
|
||||
}
|
||||
};
|
||||
|
||||
/**
|
||||
* Get server replicas (coordinator only)
|
||||
* GET /api/cluster/servers/:serverId/replicas
|
||||
*/
|
||||
export const getReplicasForServer = (req: Request, res: Response): void => {
|
||||
try {
|
||||
const mode = getClusterMode();
|
||||
|
||||
if (mode !== 'coordinator') {
|
||||
res.status(403).json({
|
||||
success: false,
|
||||
message: 'This endpoint is only available on coordinator nodes',
|
||||
});
|
||||
return;
|
||||
}
|
||||
|
||||
const { serverId } = req.params;
|
||||
const replicas = getServerReplicas(serverId);
|
||||
|
||||
res.json({
|
||||
success: true,
|
||||
data: replicas,
|
||||
});
|
||||
} catch (error) {
|
||||
console.error('Error getting server replicas:', error);
|
||||
res.status(500).json({
|
||||
success: false,
|
||||
message: 'Failed to get server replicas',
|
||||
});
|
||||
}
|
||||
};
|
||||
|
||||
/**
|
||||
* Get session affinity information (coordinator only)
|
||||
* GET /api/cluster/sessions/:sessionId
|
||||
*/
|
||||
export const getSessionAffinityInfo = (req: Request, res: Response): void => {
|
||||
try {
|
||||
const mode = getClusterMode();
|
||||
|
||||
if (mode !== 'coordinator') {
|
||||
res.status(403).json({
|
||||
success: false,
|
||||
message: 'This endpoint is only available on coordinator nodes',
|
||||
});
|
||||
return;
|
||||
}
|
||||
|
||||
const { sessionId } = req.params;
|
||||
const affinity = getSessionAffinity(sessionId);
|
||||
|
||||
if (!affinity) {
|
||||
res.status(404).json({
|
||||
success: false,
|
||||
message: 'Session affinity not found',
|
||||
});
|
||||
return;
|
||||
}
|
||||
|
||||
res.json({
|
||||
success: true,
|
||||
data: affinity,
|
||||
});
|
||||
} catch (error) {
|
||||
console.error('Error getting session affinity:', error);
|
||||
res.status(500).json({
|
||||
success: false,
|
||||
message: 'Failed to get session affinity',
|
||||
});
|
||||
}
|
||||
};
```diff
@@ -508,7 +508,7 @@ export const updateToolDescription = async (req: Request, res: Response): Promis

 export const updateSystemConfig = (req: Request, res: Response): void => {
   try {
-    const { routing, install, smartRouting, mcpRouter, nameSeparator, cluster } = req.body;
+    const { routing, install, smartRouting, mcpRouter, nameSeparator } = req.body;
     const currentUser = (req as any).user;

     if (
@@ -533,8 +533,7 @@ export const updateSystemConfig = (req: Request, res: Response): void => {
         typeof mcpRouter.referer !== 'string' &&
         typeof mcpRouter.title !== 'string' &&
         typeof mcpRouter.baseUrl !== 'string')) &&
-      typeof nameSeparator !== 'string' &&
-      !cluster
+      typeof nameSeparator !== 'string'
     ) {
       res.status(400).json({
         success: false,
@@ -611,13 +610,6 @@ export const updateSystemConfig = (req: Request, res: Response): void => {
       };
     }

-    if (!settings.systemConfig.cluster) {
-      settings.systemConfig.cluster = {
-        enabled: false,
-        mode: 'standalone',
-      };
-    }
-
     if (routing) {
       if (typeof routing.enableGlobalRoute === 'boolean') {
         settings.systemConfig.routing.enableGlobalRoute = routing.enableGlobalRoute;
@@ -727,88 +719,6 @@ export const updateSystemConfig = (req: Request, res: Response): void => {
       settings.systemConfig.nameSeparator = nameSeparator;
     }

-    if (cluster) {
-      if (typeof cluster.enabled === 'boolean') {
-        settings.systemConfig.cluster.enabled = cluster.enabled;
-      }
-      if (
-        typeof cluster.mode === 'string' &&
-        ['standalone', 'node', 'coordinator'].includes(cluster.mode)
-      ) {
-        settings.systemConfig.cluster.mode = cluster.mode as 'standalone' | 'node' | 'coordinator';
-      }
-
-      // Node configuration
-      if (cluster.node) {
-        if (!settings.systemConfig.cluster.node) {
-          settings.systemConfig.cluster.node = {
-            coordinatorUrl: '',
-          };
-        }
-        if (typeof cluster.node.id === 'string') {
-          settings.systemConfig.cluster.node.id = cluster.node.id;
-        }
-        if (typeof cluster.node.name === 'string') {
-          settings.systemConfig.cluster.node.name = cluster.node.name;
-        }
-        if (typeof cluster.node.coordinatorUrl === 'string') {
-          settings.systemConfig.cluster.node.coordinatorUrl = cluster.node.coordinatorUrl;
-        }
-        if (typeof cluster.node.heartbeatInterval === 'number') {
-          settings.systemConfig.cluster.node.heartbeatInterval = cluster.node.heartbeatInterval;
-        }
-        if (typeof cluster.node.registerOnStartup === 'boolean') {
-          settings.systemConfig.cluster.node.registerOnStartup = cluster.node.registerOnStartup;
-        }
-      }
-
-      // Coordinator configuration
-      if (cluster.coordinator) {
-        if (!settings.systemConfig.cluster.coordinator) {
-          settings.systemConfig.cluster.coordinator = {};
-        }
-        if (typeof cluster.coordinator.nodeTimeout === 'number') {
-          settings.systemConfig.cluster.coordinator.nodeTimeout = cluster.coordinator.nodeTimeout;
-        }
-        if (typeof cluster.coordinator.cleanupInterval === 'number') {
-          settings.systemConfig.cluster.coordinator.cleanupInterval =
-            cluster.coordinator.cleanupInterval;
-        }
-        if (typeof cluster.coordinator.stickySessionTimeout === 'number') {
-          settings.systemConfig.cluster.coordinator.stickySessionTimeout =
-            cluster.coordinator.stickySessionTimeout;
-        }
-      }
-
-      // Sticky session configuration
-      if (cluster.stickySession) {
-        if (!settings.systemConfig.cluster.stickySession) {
-          settings.systemConfig.cluster.stickySession = {
-            enabled: true,
-            strategy: 'consistent-hash',
-          };
-        }
-        if (typeof cluster.stickySession.enabled === 'boolean') {
-          settings.systemConfig.cluster.stickySession.enabled = cluster.stickySession.enabled;
-        }
-        if (
-          typeof cluster.stickySession.strategy === 'string' &&
-          ['consistent-hash', 'cookie', 'header'].includes(cluster.stickySession.strategy)
-        ) {
-          settings.systemConfig.cluster.stickySession.strategy = cluster.stickySession.strategy as
-            | 'consistent-hash'
-            | 'cookie'
-            | 'header';
-        }
-        if (typeof cluster.stickySession.cookieName === 'string') {
-          settings.systemConfig.cluster.stickySession.cookieName = cluster.stickySession.cookieName;
-        }
-        if (typeof cluster.stickySession.headerName === 'string') {
-          settings.systemConfig.cluster.stickySession.headerName = cluster.stickySession.headerName;
-        }
-      }
-    }
-
     if (saveSettings(settings, currentUser)) {
       res.json({
         success: true,
```
@@ -1,176 +0,0 @@
```typescript
/**
 * Cluster Routing Middleware
 *
 * Handles routing of MCP requests in cluster mode:
 * - Determines target node based on session affinity
 * - Proxies requests to appropriate nodes
 * - Maintains sticky sessions
 */

import { Request, Response, NextFunction } from 'express';
import axios from 'axios';
import {
  isClusterEnabled,
  getClusterMode,
  getNodeForSession,
  getCurrentNodeId,
} from '../services/clusterService.js';

/**
 * Cluster routing middleware for SSE connections
 */
export const clusterSseRouting = async (
  req: Request,
  res: Response,
  next: NextFunction,
): Promise<void> => {
  // If cluster is not enabled or we're in standalone mode, proceed normally
  if (!isClusterEnabled() || getClusterMode() === 'standalone') {
    next();
    return;
  }

  // Coordinator should handle all requests normally
  if (getClusterMode() === 'coordinator') {
    // For coordinator, we need to route to appropriate node
    await routeToNode(req, res, next);
    return;
  }

  // For regular nodes, proceed normally (they handle their own servers)
  next();
};

/**
 * Cluster routing middleware for MCP HTTP requests
 */
export const clusterMcpRouting = async (
  req: Request,
  res: Response,
  next: NextFunction,
): Promise<void> => {
  // If cluster is not enabled or we're in standalone mode, proceed normally
  if (!isClusterEnabled() || getClusterMode() === 'standalone') {
    next();
    return;
  }

  // Coordinator should route requests to appropriate nodes
  if (getClusterMode() === 'coordinator') {
    await routeToNode(req, res, next);
    return;
  }

  // For regular nodes, proceed normally
  next();
};

/**
 * Route request to appropriate node based on session affinity
 */
const routeToNode = async (
  req: Request,
  res: Response,
  next: NextFunction,
): Promise<void> => {
  try {
    // Extract session ID from headers or generate new one
    const sessionId =
      (req.headers['mcp-session-id'] as string) ||
      (req.query.sessionId as string) ||
      generateSessionId(req);

    // Determine target node
    const group = req.params.group;
    const targetNode = getNodeForSession(sessionId, group, req.headers);

    if (!targetNode) {
      // No available nodes, return error
      res.status(503).json({
        success: false,
        message: 'No available nodes to handle request',
      });
      return;
    }

    // Check if this is the current node
    const currentNodeId = getCurrentNodeId();
    if (currentNodeId && targetNode.id === currentNodeId) {
      // Handle locally
      next();
      return;
    }

    // Proxy request to target node
    await proxyRequest(req, res, targetNode.url);
  } catch (error) {
    console.error('Error in cluster routing:', error);
    next(error);
  }
};

/**
 * Generate session ID from request
 */
const generateSessionId = (req: Request): string => {
  // Use IP address and user agent as seed for consistent hashing
  const seed = `${req.ip}-${req.get('user-agent') || 'unknown'}`;
  return Buffer.from(seed).toString('base64');
};
```
```typescript
/**
 * Proxy request to another node
 */
const proxyRequest = async (
  req: Request,
  res: Response,
  targetUrl: string,
): Promise<void> => {
  try {
    // Build target URL
    const url = new URL(req.originalUrl || req.url, targetUrl);

    // Prepare headers (excluding host and connection headers)
    const headers: Record<string, string> = {};
    for (const [key, value] of Object.entries(req.headers)) {
      if (
        key.toLowerCase() !== 'host' &&
        key.toLowerCase() !== 'connection' &&
        value
      ) {
        headers[key] = Array.isArray(value) ? value[0] : value;
      }
    }

    // Forward request to target node
    const response = await axios({
      method: req.method,
      url: url.toString(),
      headers,
      data: req.body,
      responseType: 'stream',
      timeout: 30000,
      validateStatus: () => true, // Don't throw on any status
    });

    // Forward response headers
    for (const [key, value] of Object.entries(response.headers)) {
      if (
        key.toLowerCase() !== 'connection' &&
        key.toLowerCase() !== 'transfer-encoding'
      ) {
        res.setHeader(key, value as string);
      }
    }

    // Forward status code and stream response
    res.status(response.status);
    response.data.pipe(res);
  } catch (error) {
    console.error('Error proxying request:', error);
    res.status(502).json({
      success: false,
      message: 'Failed to proxy request to target node',
    });
  }
};
```
```diff
@@ -80,14 +80,6 @@ import {
   getGroupOpenAPISpec,
 } from '../controllers/openApiController.js';
 import { handleOAuthCallback } from '../controllers/oauthCallbackController.js';
-import {
-  getClusterStatus,
-  registerNodeEndpoint,
-  updateHeartbeat,
-  getNodes,
-  getReplicasForServer,
-  getSessionAffinityInfo,
-} from '../controllers/clusterController.js';
 import { auth } from '../middlewares/auth.js';

 const router = express.Router();
@@ -175,14 +167,6 @@ export const initRoutes = (app: express.Application): void => {
   router.delete('/logs', clearLogs);
   router.get('/logs/stream', streamLogs);

-  // Cluster management routes
-  router.get('/cluster/status', getClusterStatus);
-  router.post('/cluster/register', registerNodeEndpoint);
-  router.post('/cluster/heartbeat', updateHeartbeat);
-  router.get('/cluster/nodes', getNodes);
-  router.get('/cluster/servers/:serverId/replicas', getReplicasForServer);
-  router.get('/cluster/sessions/:sessionId', getSessionAffinityInfo);
-
   // MCP settings export route
   router.get('/mcp-settings/export', getMcpSettingsJson);
```
```diff
@@ -15,11 +15,9 @@ import {
 } from './services/sseService.js';
 import { initializeDefaultUser } from './models/User.js';
 import { sseUserContextMiddleware } from './middlewares/userContext.js';
-import { clusterSseRouting, clusterMcpRouting } from './middlewares/clusterRouting.js';
 import { findPackageRoot } from './utils/path.js';
 import { getCurrentModuleDir } from './utils/moduleDir.js';
 import { initOAuthProvider, getOAuthRouter } from './services/oauthService.js';
-import { initClusterService, shutdownClusterService } from './services/clusterService.js';

 /**
  * Get the directory of the current module
@@ -75,74 +73,53 @@ export class AppServer {
     initRoutes(this.app);
     console.log('Server initialized successfully');

-    // Initialize cluster service
-    await initClusterService();
-
     initUpstreamServers()
       .then(() => {
         console.log('MCP server initialized successfully');

-        // Original routes (global and group-based) with cluster routing
-        this.app.get(
-          `${this.basePath}/sse/:group(.*)?`,
-          sseUserContextMiddleware,
-          clusterSseRouting,
-          (req, res) => handleSseConnection(req, res),
-        );
-        this.app.post(
-          `${this.basePath}/messages`,
-          sseUserContextMiddleware,
-          clusterSseRouting,
-          handleSseMessage,
+        // Original routes (global and group-based)
+        this.app.get(`${this.basePath}/sse/:group(.*)?`, sseUserContextMiddleware, (req, res) =>
+          handleSseConnection(req, res),
         );
+        this.app.post(`${this.basePath}/messages`, sseUserContextMiddleware, handleSseMessage);
         this.app.post(
           `${this.basePath}/mcp/:group(.*)?`,
           sseUserContextMiddleware,
-          clusterMcpRouting,
           handleMcpPostRequest,
         );
         this.app.get(
           `${this.basePath}/mcp/:group(.*)?`,
           sseUserContextMiddleware,
-          clusterMcpRouting,
           handleMcpOtherRequest,
         );
         this.app.delete(
           `${this.basePath}/mcp/:group(.*)?`,
           sseUserContextMiddleware,
-          clusterMcpRouting,
           handleMcpOtherRequest,
         );

-        // User-scoped routes with user context middleware and cluster routing
-        this.app.get(
-          `${this.basePath}/:user/sse/:group(.*)?`,
-          sseUserContextMiddleware,
-          clusterSseRouting,
-          (req, res) => handleSseConnection(req, res),
+        // User-scoped routes with user context middleware
+        this.app.get(`${this.basePath}/:user/sse/:group(.*)?`, sseUserContextMiddleware, (req, res) =>
+          handleSseConnection(req, res),
         );
         this.app.post(
           `${this.basePath}/:user/messages`,
           sseUserContextMiddleware,
-          clusterSseRouting,
           handleSseMessage,
         );
         this.app.post(
           `${this.basePath}/:user/mcp/:group(.*)?`,
           sseUserContextMiddleware,
-          clusterMcpRouting,
           handleMcpPostRequest,
         );
         this.app.get(
           `${this.basePath}/:user/mcp/:group(.*)?`,
           sseUserContextMiddleware,
-          clusterMcpRouting,
           handleMcpOtherRequest,
         );
         this.app.delete(
           `${this.basePath}/:user/mcp/:group(.*)?`,
           sseUserContextMiddleware,
-          clusterMcpRouting,
           handleMcpOtherRequest,
         );
       })
@@ -214,11 +191,6 @@ export class AppServer {
     return this.app;
   }

-  shutdown(): void {
-    console.log('Shutting down cluster service...');
-    shutdownClusterService();
-  }
-
   // Helper method to find frontend dist path in different environments
   private findFrontendDistPath(): string | null {
     // Debug flag for detailed logging
```
@@ -1,538 +0,0 @@
```typescript
/**
 * Cluster Service
 *
 * Manages cluster functionality including:
 * - Node registration and discovery
 * - Health checking and heartbeats
 * - Session affinity (sticky sessions)
 * - Load balancing across replicas
 */

import { randomUUID } from 'crypto';
import os from 'os';
import crypto from 'crypto';
import axios from 'axios';
import {
  ClusterNode,
  ClusterConfig,
  ServerReplica,
  SessionAffinity,
} from '../types/index.js';
import { loadSettings } from '../config/index.js';

// In-memory storage for cluster state
const nodes: Map<string, ClusterNode> = new Map();
const sessionAffinities: Map<string, SessionAffinity> = new Map();
const serverReplicas: Map<string, ServerReplica[]> = new Map();
let currentNodeId: string | null = null;
let heartbeatIntervalId: NodeJS.Timeout | null = null;
let cleanupIntervalId: NodeJS.Timeout | null = null;

/**
 * Get cluster configuration from settings
 */
export const getClusterConfig = (): ClusterConfig | null => {
  const settings = loadSettings();
  return settings.systemConfig?.cluster || null;
};

/**
 * Check if cluster mode is enabled
 */
export const isClusterEnabled = (): boolean => {
  const config = getClusterConfig();
  return config?.enabled === true;
};

/**
 * Get the current node's operating mode
 */
export const getClusterMode = (): 'standalone' | 'node' | 'coordinator' => {
  const config = getClusterConfig();
  if (!config?.enabled) {
    return 'standalone';
  }
  return config.mode || 'standalone';
};

/**
 * Get the current node ID
 */
export const getCurrentNodeId = (): string | null => {
  return currentNodeId;
};

/**
 * Initialize cluster service based on configuration
 */
export const initClusterService = async (): Promise<void> => {
  const config = getClusterConfig();

  if (!config?.enabled) {
    console.log('Cluster mode is disabled');
    return;
  }

  console.log(`Initializing cluster service in ${config.mode} mode`);

  switch (config.mode) {
    case 'node':
      await initAsNode(config);
      break;
    case 'coordinator':
      await initAsCoordinator(config);
      break;
    case 'standalone':
    default:
      console.log('Running in standalone mode');
      break;
  }
};
```
```typescript
/**
 * Initialize this instance as a cluster node
 */
const initAsNode = async (config: ClusterConfig): Promise<void> => {
  if (!config.node) {
    throw new Error('Node configuration is required for cluster node mode');
  }

  // Generate or use provided node ID
  currentNodeId = config.node.id || randomUUID();

  const nodeName = config.node.name || os.hostname();
  const port = process.env.PORT || 3000;

  console.log(`Initializing as cluster node: ${nodeName} (${currentNodeId})`);

  // Register with coordinator if enabled
  if (config.node.registerOnStartup !== false) {
    await registerWithCoordinator(config, nodeName, Number(port));
  }

  // Start heartbeat to coordinator
  const heartbeatInterval = config.node.heartbeatInterval || 5000;
  heartbeatIntervalId = setInterval(async () => {
    await sendHeartbeat(config, nodeName, Number(port));
  }, heartbeatInterval);

  console.log(`Node registered with coordinator at ${config.node.coordinatorUrl}`);
};

/**
 * Initialize this instance as the coordinator
 */
const initAsCoordinator = async (config: ClusterConfig): Promise<void> => {
  currentNodeId = 'coordinator';

  console.log('Initializing as cluster coordinator');

  // Start cleanup interval for inactive nodes
  const cleanupInterval = config.coordinator?.cleanupInterval || 30000;
  cleanupIntervalId = setInterval(() => {
    cleanupInactiveNodes(config);
  }, cleanupInterval);

  console.log('Cluster coordinator initialized');
};

/**
 * Register this node with the coordinator
 */
const registerWithCoordinator = async (
  config: ClusterConfig,
  nodeName: string,
  port: number,
): Promise<void> => {
  if (!config.node?.coordinatorUrl) {
    return;
  }

  const hostname = os.hostname();
  const nodeUrl = `http://${hostname}:${port}`;

  // Get list of local MCP servers
  const settings = loadSettings();
  const servers = Object.keys(settings.mcpServers || {});

  const nodeInfo: ClusterNode = {
    id: currentNodeId!,
    name: nodeName,
    host: hostname,
    port,
    url: nodeUrl,
    status: 'active',
    lastHeartbeat: Date.now(),
    servers,
  };

  try {
    await axios.post(
      `${config.node.coordinatorUrl}/api/cluster/register`,
      nodeInfo,
      { timeout: 5000 }
    );
    console.log('Successfully registered with coordinator');
  } catch (error) {
    console.error('Failed to register with coordinator:', error);
  }
};

/**
 * Send heartbeat to coordinator
 */
const sendHeartbeat = async (
  config: ClusterConfig,
  nodeName: string,
  port: number,
): Promise<void> => {
  if (!config.node?.coordinatorUrl || !currentNodeId) {
    return;
  }

  const hostname = os.hostname();
  const settings = loadSettings();
  const servers = Object.keys(settings.mcpServers || {});

  try {
    await axios.post(
      `${config.node.coordinatorUrl}/api/cluster/heartbeat`,
      {
        id: currentNodeId,
        name: nodeName,
        host: hostname,
        port,
        servers,
        timestamp: Date.now(),
      },
      { timeout: 5000 }
    );
  } catch (error) {
    console.warn('Failed to send heartbeat to coordinator:', error);
  }
};

/**
 * Cleanup inactive nodes (coordinator only)
 */
const cleanupInactiveNodes = (config: ClusterConfig): void => {
  const timeout = config.coordinator?.nodeTimeout || 15000;
  const now = Date.now();

  for (const [nodeId, node] of nodes.entries()) {
    if (now - node.lastHeartbeat > timeout) {
      console.log(
        `Marking node ${nodeId} as unhealthy (last heartbeat: ${new Date(node.lastHeartbeat).toISOString()})`,
      );
      node.status = 'unhealthy';

      // Remove server replicas for this node
      for (const [serverId, replicas] of serverReplicas.entries()) {
        const updatedReplicas = replicas.filter(r => r.nodeId !== nodeId);
        if (updatedReplicas.length === 0) {
          serverReplicas.delete(serverId);
        } else {
          serverReplicas.set(serverId, updatedReplicas);
        }
      }
    }
  }

  // Clean up expired session affinities
  const _sessionTimeout = config.coordinator?.stickySessionTimeout || 3600000; // 1 hour
  for (const [sessionId, affinity] of sessionAffinities.entries()) {
    if (now > affinity.expiresAt) {
      sessionAffinities.delete(sessionId);
      console.log(`Removed expired session affinity: ${sessionId}`);
    }
  }
};
```
```typescript
/**
 * Register a node (coordinator endpoint)
 */
export const registerNode = (nodeInfo: ClusterNode): void => {
  nodes.set(nodeInfo.id, {
    ...nodeInfo,
    status: 'active',
    lastHeartbeat: Date.now(),
  });

  // Update server replicas
  for (const serverId of nodeInfo.servers) {
    const replicas = serverReplicas.get(serverId) || [];

    // Check if replica already exists
    const existingIndex = replicas.findIndex(r => r.nodeId === nodeInfo.id);
    const replica: ServerReplica = {
      serverId,
      nodeId: nodeInfo.id,
      nodeUrl: nodeInfo.url,
      status: 'active',
      weight: 1,
    };

    if (existingIndex >= 0) {
      replicas[existingIndex] = replica;
    } else {
      replicas.push(replica);
    }

    serverReplicas.set(serverId, replicas);
  }

  console.log(
    `Node registered: ${nodeInfo.name} (${nodeInfo.id}) with ${nodeInfo.servers.length} servers`,
  );
};

/**
 * Update node heartbeat (coordinator endpoint)
 */
export const updateNodeHeartbeat = (nodeId: string, servers: string[]): void => {
  const node = nodes.get(nodeId);
  if (!node) {
    console.warn(`Received heartbeat from unknown node: ${nodeId}`);
    return;
  }

  node.lastHeartbeat = Date.now();
  node.status = 'active';
  node.servers = servers;

  // Update server replicas
  const currentReplicas = new Set<string>();
  for (const [serverId, replicas] of serverReplicas.entries()) {
    for (const replica of replicas) {
      if (replica.nodeId === nodeId) {
        currentReplicas.add(serverId);
      }
    }
  }

  // Add new servers
  for (const serverId of servers) {
    if (!currentReplicas.has(serverId)) {
      const replicas = serverReplicas.get(serverId) || [];
      replicas.push({
        serverId,
        nodeId,
        nodeUrl: node.url,
        status: 'active',
        weight: 1,
      });
      serverReplicas.set(serverId, replicas);
    }
  }

  // Remove servers that are no longer on this node
  for (const serverId of currentReplicas) {
    if (!servers.includes(serverId)) {
      const replicas = serverReplicas.get(serverId) || [];
      const updatedReplicas = replicas.filter(r => r.nodeId !== nodeId);
      if (updatedReplicas.length === 0) {
        serverReplicas.delete(serverId);
      } else {
        serverReplicas.set(serverId, updatedReplicas);
      }
    }
  }
};
```
|
||||
|
||||
/**
|
||||
* Get all active nodes (coordinator)
|
||||
*/
|
||||
export const getActiveNodes = (): ClusterNode[] => {
|
||||
return Array.from(nodes.values()).filter(n => n.status === 'active');
|
||||
};
|
||||
|
||||
/**
|
||||
* Get all nodes including unhealthy ones (coordinator)
|
||||
*/
|
||||
export const getAllNodes = (): ClusterNode[] => {
|
||||
return Array.from(nodes.values());
|
||||
};
|
||||
|
||||
/**
|
||||
* Get replicas for a specific server
|
||||
*/
|
||||
export const getServerReplicas = (serverId: string): ServerReplica[] => {
|
||||
return serverReplicas.get(serverId) || [];
|
||||
};
|
||||
|
||||
/**
|
||||
* Get node for a session using sticky session strategy
|
||||
*/
|
||||
export const getNodeForSession = (
|
||||
sessionId: string,
|
||||
serverId?: string,
|
||||
  headers?: Record<string, string | string[] | undefined>
): ClusterNode | null => {
  const config = getClusterConfig();

  if (!config?.enabled || !config.stickySession?.enabled) {
    return null;
  }

  // Check if session already has affinity
  const existingAffinity = sessionAffinities.get(sessionId);
  if (existingAffinity) {
    const node = nodes.get(existingAffinity.nodeId);
    if (node && node.status === 'active') {
      // Update last accessed time
      existingAffinity.lastAccessed = Date.now();
      return node;
    } else {
      // Node is no longer active, remove affinity
      sessionAffinities.delete(sessionId);
    }
  }

  // Determine which node to use based on strategy
  const strategy = config.stickySession.strategy || 'consistent-hash';
  let targetNode: ClusterNode | null = null;

  switch (strategy) {
    case 'consistent-hash':
      targetNode = getNodeByConsistentHash(sessionId, serverId);
      break;
    case 'cookie':
      targetNode = getNodeByCookie(headers, serverId);
      break;
    case 'header':
      targetNode = getNodeByHeader(headers, serverId);
      break;
  }

  if (targetNode) {
    // Create session affinity
    const timeout = config.coordinator?.stickySessionTimeout || 3600000;
    const affinity: SessionAffinity = {
      sessionId,
      nodeId: targetNode.id,
      serverId,
      createdAt: Date.now(),
      lastAccessed: Date.now(),
      expiresAt: Date.now() + timeout,
    };
    sessionAffinities.set(sessionId, affinity);
  }

  return targetNode;
};

/**
 * Get node using consistent hashing
 */
const getNodeByConsistentHash = (sessionId: string, serverId?: string): ClusterNode | null => {
  let availableNodes = getActiveNodes();

  // Filter nodes that have the server if serverId is specified
  if (serverId) {
    const replicas = getServerReplicas(serverId);
    const nodeIds = new Set(replicas.filter(r => r.status === 'active').map(r => r.nodeId));
    availableNodes = availableNodes.filter(n => nodeIds.has(n.id));
  }

  if (availableNodes.length === 0) {
    return null;
  }

  // Simple consistent hash: hash session ID and mod by node count
  const hash = crypto.createHash('md5').update(sessionId).digest('hex');
  const hashNum = parseInt(hash.substring(0, 8), 16);
  const index = hashNum % availableNodes.length;

  return availableNodes[index];
};
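The selection above hashes the session ID with MD5, takes the first eight hex digits as an integer, and reduces it modulo the node count. A standalone sketch of just that step (the `pickNodeId` helper is illustrative, not an MCPHub API; note this is hash-mod rather than a true consistent-hash ring, so remapping occurs when the node list changes):

```typescript
import { createHash } from 'crypto';

// Deterministically map a session ID onto one of the given node IDs.
export const pickNodeId = (sessionId: string, nodeIds: string[]): string | null => {
  if (nodeIds.length === 0) {
    return null;
  }
  const hash = createHash('md5').update(sessionId).digest('hex');
  const hashNum = parseInt(hash.substring(0, 8), 16);
  return nodeIds[hashNum % nodeIds.length];
};
```

Because the mapping depends only on the session ID and the node list, repeated calls for the same session land on the same node without any stored state.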

/**
 * Get node from cookie
 */
const getNodeByCookie = (
  headers?: Record<string, string | string[] | undefined>,
  serverId?: string
): ClusterNode | null => {
  if (!headers?.cookie) {
    return getNodeByConsistentHash(randomUUID(), serverId);
  }

  const config = getClusterConfig();
  const cookieName = config?.stickySession?.cookieName || 'MCPHUB_NODE';

  const cookies = (Array.isArray(headers.cookie) ? headers.cookie[0] : headers.cookie) || '';
  const cookieMatch = cookies.match(new RegExp(`${cookieName}=([^;]+)`));

  if (cookieMatch) {
    const nodeId = cookieMatch[1];
    const node = nodes.get(nodeId);
    if (node && node.status === 'active') {
      return node;
    }
  }

  return getNodeByConsistentHash(randomUUID(), serverId);
};
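The cookie lookup above boils down to a single regex over the `Cookie` header value. A standalone sketch (the helper name is illustrative, not an MCPHub API):

```typescript
// Extract the sticky-session node ID from a raw Cookie header string.
// Returns null when the cookie is absent.
export const extractNodeCookie = (
  cookieHeader: string,
  cookieName = 'MCPHUB_NODE',
): string | null => {
  const match = cookieHeader.match(new RegExp(`${cookieName}=([^;]+)`));
  return match ? match[1] : null;
};
```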

/**
 * Get node from header
 */
const getNodeByHeader = (
  headers?: Record<string, string | string[] | undefined>,
  serverId?: string
): ClusterNode | null => {
  const config = getClusterConfig();
  const headerName = (config?.stickySession?.headerName || 'X-MCPHub-Node').toLowerCase();

  if (headers) {
    const nodeId = headers[headerName];
    if (nodeId) {
      const nodeIdStr = Array.isArray(nodeId) ? nodeId[0] : nodeId;
      const node = nodes.get(nodeIdStr);
      if (node && node.status === 'active') {
        return node;
      }
    }
  }

  return getNodeByConsistentHash(randomUUID(), serverId);
};

/**
 * Get session affinity info for a session
 */
export const getSessionAffinity = (sessionId: string): SessionAffinity | null => {
  return sessionAffinities.get(sessionId) || null;
};

/**
 * Remove session affinity
 */
export const removeSessionAffinity = (sessionId: string): void => {
  sessionAffinities.delete(sessionId);
};

/**
 * Shutdown cluster service
 */
export const shutdownClusterService = (): void => {
  if (heartbeatIntervalId) {
    clearInterval(heartbeatIntervalId);
    heartbeatIntervalId = null;
  }

  if (cleanupIntervalId) {
    clearInterval(cleanupIntervalId);
    cleanupIntervalId = null;
  }

  console.log('Cluster service shut down');
};

/**
 * Get cluster statistics
 */
export const getClusterStats = () => {
  return {
    nodes: nodes.size,
    activeNodes: getActiveNodes().length,
    servers: serverReplicas.size,
    sessions: sessionAffinities.size,
  };
};
@@ -369,6 +369,118 @@ export const createTransportFromConfig = async (name: string, conf: ServerConfig
  return transport;
};

// Helper function to connect an on-demand server temporarily
const connectOnDemandServer = async (serverInfo: ServerInfo): Promise<void> => {
  if (!serverInfo.config) {
    throw new Error(`Server configuration not found for on-demand server: ${serverInfo.name}`);
  }

  console.log(`Connecting on-demand server: ${serverInfo.name}`);

  // Create transport
  const transport = await createTransportFromConfig(serverInfo.name, serverInfo.config);

  // Create client
  const client = new Client(
    {
      name: `mcp-client-${serverInfo.name}`,
      version: '1.0.0',
    },
    {
      capabilities: {
        prompts: {},
        resources: {},
        tools: {},
      },
    },
  );

  // Get request options from server configuration
  const serverRequestOptions = serverInfo.config.options || {};
  const requestOptions = {
    timeout: serverRequestOptions.timeout || 60000,
    resetTimeoutOnProgress: serverRequestOptions.resetTimeoutOnProgress || false,
    maxTotalTimeout: serverRequestOptions.maxTotalTimeout,
  };

  // Connect the client
  await client.connect(transport, requestOptions);

  // Update server info with client and transport
  serverInfo.client = client;
  serverInfo.transport = transport;
  serverInfo.options = requestOptions;
  serverInfo.status = 'connected';

  console.log(`Successfully connected on-demand server: ${serverInfo.name}`);

  // List tools if not already loaded
  if (serverInfo.tools.length === 0) {
    const capabilities = client.getServerCapabilities();
    if (capabilities?.tools) {
      try {
        const tools = await client.listTools({}, requestOptions);
        serverInfo.tools = tools.tools.map((tool) => ({
          name: `${serverInfo.name}${getNameSeparator()}${tool.name}`,
          description: tool.description || '',
          inputSchema: cleanInputSchema(tool.inputSchema || {}),
        }));
        // Save tools as vector embeddings for search
        saveToolsAsVectorEmbeddings(serverInfo.name, serverInfo.tools);
        console.log(`Loaded ${serverInfo.tools.length} tools for on-demand server: ${serverInfo.name}`);
      } catch (error) {
        console.warn(`Failed to list tools for on-demand server ${serverInfo.name}:`, error);
      }
    }

    // List prompts if available
    if (capabilities?.prompts) {
      try {
        const prompts = await client.listPrompts({}, requestOptions);
        serverInfo.prompts = prompts.prompts.map((prompt) => ({
          name: `${serverInfo.name}${getNameSeparator()}${prompt.name}`,
          title: prompt.title,
          description: prompt.description,
          arguments: prompt.arguments,
        }));
        console.log(`Loaded ${serverInfo.prompts.length} prompts for on-demand server: ${serverInfo.name}`);
      } catch (error) {
        console.warn(`Failed to list prompts for on-demand server ${serverInfo.name}:`, error);
      }
    }
  }
};

// Helper function to disconnect an on-demand server
const disconnectOnDemandServer = (serverInfo: ServerInfo): void => {
  if (serverInfo.connectionMode !== 'on-demand') {
    return;
  }

  console.log(`Disconnecting on-demand server: ${serverInfo.name}`);

  try {
    if (serverInfo.client) {
      serverInfo.client.close();
      serverInfo.client = undefined;
    }
    if (serverInfo.transport) {
      serverInfo.transport.close();
      serverInfo.transport = undefined;
    }
    serverInfo.status = 'disconnected';
    console.log(`Successfully disconnected on-demand server: ${serverInfo.name}`);
  } catch (error) {
    // Log disconnect errors but don't throw - this is cleanup code that shouldn't fail the request
    // The connection is likely already closed if we get an error here
    console.warn(`Error disconnecting on-demand server ${serverInfo.name}:`, error);
    // Force status to disconnected even if cleanup had errors
    serverInfo.status = 'disconnected';
    serverInfo.client = undefined;
    serverInfo.transport = undefined;
  }
};
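The connect/disconnect pair above is always used by the tool handlers in a connect → call → finally-disconnect pattern, so the ephemeral server is torn down even when the tool call throws. A self-contained sketch of that lifecycle (type and function names here are illustrative, not MCPHub APIs):

```typescript
// Stand-ins for connectOnDemandServer, callToolWithReconnect,
// and disconnectOnDemandServer.
type Lifecycle<T> = {
  connect: () => Promise<void>;
  call: () => Promise<T>;
  disconnect: () => void;
};

export const withOnDemandConnection = async <T>(lc: Lifecycle<T>): Promise<T> => {
  await lc.connect();
  try {
    return await lc.call();
  } finally {
    lc.disconnect(); // runs even when call() throws
  }
};
```

Putting the disconnect in `finally` rather than after the call is what keeps a failing tool invocation from leaking a child process or transport.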

// Helper function to handle client.callTool with reconnection logic
const callToolWithReconnect = async (
  serverInfo: ServerInfo,
@@ -529,7 +641,6 @@ export const initializeClientsFromSettings = async (
      continue;
    }

    let transport;
    let openApiClient;
    if (expandedConf.type === 'openapi') {
      // Handle OpenAPI type servers
@@ -600,10 +711,43 @@ export const initializeClientsFromSettings = async (
        serverInfo.error = `Failed to initialize OpenAPI server: ${error}`;
        continue;
      }
    } else {
      transport = await createTransportFromConfig(name, expandedConf);
    }

    // Handle on-demand connection mode servers
    // These servers connect briefly to get tools list, then disconnect
    const connectionMode = expandedConf.connectionMode || 'persistent';
    if (connectionMode === 'on-demand') {
      console.log(`Initializing on-demand server: ${name}`);
      const serverInfo: ServerInfo = {
        name,
        owner: expandedConf.owner,
        status: 'disconnected',
        error: null,
        tools: [],
        prompts: [],
        createTime: Date.now(),
        enabled: expandedConf.enabled === undefined ? true : expandedConf.enabled,
        connectionMode: 'on-demand',
        config: expandedConf,
      };
      nextServerInfos.push(serverInfo);

      // Connect briefly to get tools list, then disconnect
      try {
        await connectOnDemandServer(serverInfo);
        console.log(`Successfully initialized on-demand server: ${name} with ${serverInfo.tools.length} tools`);
        // Disconnect immediately after getting tools
        disconnectOnDemandServer(serverInfo);
      } catch (error) {
        console.error(`Failed to initialize on-demand server ${name}:`, error);
        serverInfo.error = `Failed to initialize: ${error}`;
      }
      continue;
    }

    // Create transport for persistent connection mode servers (not OpenAPI, already handled above)
    const transport = await createTransportFromConfig(name, expandedConf);

    const client = new Client(
      {
        name: `mcp-client-${name}`,
@@ -644,6 +788,7 @@ export const initializeClientsFromSettings = async (
      transport,
      options: requestOptions,
      createTime: Date.now(),
      connectionMode: connectionMode,
      config: expandedConf, // Store reference to expanded config
    };
@@ -1011,8 +1156,11 @@ export const handleListToolsRequest = async (_: any, extra: any) => {
    const targetGroup = group?.startsWith('$smart/') ? group.substring(7) : undefined;

    // Get info about available servers, filtered by target group if specified
    // Include both connected persistent servers and on-demand servers (even if disconnected)
    let availableServers = serverInfos.filter(
-     (server) => server.status === 'connected' && server.enabled !== false,
+     (server) =>
+       server.enabled !== false &&
+       (server.status === 'connected' || server.connectionMode === 'on-demand'),
    );

    // If a target group is specified, filter servers to only those in the group
@@ -1139,6 +1287,10 @@ Available servers: ${serversList}`,
export const handleCallToolRequest = async (request: any, extra: any) => {
  console.log(`Handling CallToolRequest for tool: ${JSON.stringify(request.params)}`);
  try {
    // Note: On-demand server connection and disconnection are handled in the specific
    // code paths below (call_tool and regular tool handling) with try-finally blocks.
    // This outer try-catch only handles errors from operations that don't connect servers.

    // Special handling for agent group tools
    if (request.params.name === 'search_tools') {
      const { query, limit = 10 } = request.params.arguments || {};
@@ -1284,10 +1436,11 @@ export const handleCallToolRequest = async (request: any, extra: any) => {
        targetServerInfo = getServerByName(extra.server);
      } else {
        // Find the first server that has this tool
        // Include both connected servers and on-demand servers (even if disconnected)
        targetServerInfo = serverInfos.find(
          (serverInfo) =>
-           serverInfo.status === 'connected' &&
            serverInfo.enabled !== false &&
+           (serverInfo.status === 'connected' || serverInfo.connectionMode === 'on-demand') &&
            serverInfo.tools.some((tool) => tool.name === toolName),
        );
      }
@@ -1363,6 +1516,11 @@ export const handleCallToolRequest = async (request: any, extra: any) => {
      }

      // Call the tool on the target server (MCP servers)
      // Connect on-demand server if needed
      if (targetServerInfo.connectionMode === 'on-demand' && !targetServerInfo.client) {
        await connectOnDemandServer(targetServerInfo);
      }

      const client = targetServerInfo.client;
      if (!client) {
        throw new Error(`Client not found for server: ${targetServerInfo.name}`);
@@ -1379,17 +1537,23 @@ export const handleCallToolRequest = async (request: any, extra: any) => {
      const separator = getNameSeparator();
      const prefix = `${targetServerInfo.name}${separator}`;
      toolName = toolName.startsWith(prefix) ? toolName.substring(prefix.length) : toolName;
-     const result = await callToolWithReconnect(
-       targetServerInfo,
-       {
-         name: toolName,
-         arguments: finalArgs,
-       },
-       targetServerInfo.options || {},
-     );
-
-     console.log(`Tool invocation result: ${JSON.stringify(result)}`);
-     return result;
+     try {
+       const result = await callToolWithReconnect(
+         targetServerInfo,
+         {
+           name: toolName,
+           arguments: finalArgs,
+         },
+         targetServerInfo.options || {},
+       );
+
+       console.log(`Tool invocation result: ${JSON.stringify(result)}`);
+       return result;
+     } finally {
+       // Disconnect on-demand server after tool call
+       disconnectOnDemandServer(targetServerInfo);
+     }
    }

    // Regular tool handling
@@ -1459,6 +1623,11 @@ export const handleCallToolRequest = async (request: any, extra: any) => {
    }

    // Handle MCP servers
    // Connect on-demand server if needed
    if (serverInfo.connectionMode === 'on-demand' && !serverInfo.client) {
      await connectOnDemandServer(serverInfo);
    }

    const client = serverInfo.client;
    if (!client) {
      throw new Error(`Client not found for server: ${serverInfo.name}`);
@@ -1469,13 +1638,19 @@ export const handleCallToolRequest = async (request: any, extra: any) => {
    request.params.name = request.params.name.startsWith(prefix)
      ? request.params.name.substring(prefix.length)
      : request.params.name;
-   const result = await callToolWithReconnect(
-     serverInfo,
-     request.params,
-     serverInfo.options || {},
-   );
-   console.log(`Tool call result: ${JSON.stringify(result)}`);
-   return result;
+   try {
+     const result = await callToolWithReconnect(
+       serverInfo,
+       request.params,
+       serverInfo.options || {},
+     );
+     console.log(`Tool call result: ${JSON.stringify(result)}`);
+     return result;
+   } finally {
+     // Disconnect on-demand server after tool call
+     disconnectOnDemandServer(serverInfo);
+   }
  } catch (error) {
    console.error(`Error handling CallToolRequest: ${error}`);
    return {
@@ -171,7 +171,6 @@ export interface SystemConfig {
  };
  nameSeparator?: string; // Separator used between server name and tool/prompt name (default: '-')
  oauth?: OAuthProviderConfig; // OAuth provider configuration for upstream MCP servers
  cluster?: ClusterConfig; // Cluster configuration for distributed deployment
}

export interface UserConfig {
@@ -205,6 +204,7 @@ export interface ServerConfig {
  enabled?: boolean; // Flag to enable/disable the server
  owner?: string; // Owner of the server, defaults to 'admin' user
  keepAliveInterval?: number; // Keep-alive ping interval in milliseconds (default: 60000ms for SSE servers)
  connectionMode?: 'persistent' | 'on-demand'; // Connection strategy: 'persistent' maintains long-running connections (default), 'on-demand' connects only when tools are called
  tools?: Record<string, { enabled: boolean; description?: string }>; // Tool-specific configurations with enable/disable state and custom descriptions
  prompts?: Record<string, { enabled: boolean; description?: string }>; // Prompt-specific configurations with enable/disable state and custom descriptions
  options?: Partial<Pick<RequestOptions, 'timeout' | 'resetTimeoutOnProgress' | 'maxTotalTimeout'>>; // MCP request options configuration
@@ -313,6 +313,7 @@ export interface ServerInfo {
  options?: RequestOptions; // Options for requests
  createTime: number; // Timestamp of when the server was created
  enabled?: boolean; // Flag to indicate if the server is enabled
  connectionMode?: 'persistent' | 'on-demand'; // Connection strategy for this server
  keepAliveIntervalId?: NodeJS.Timeout; // Timer ID for keep-alive ping interval
  config?: ServerConfig; // Reference to the original server configuration for OpenAPI passthrough headers
  oauth?: {
@@ -357,63 +358,3 @@ export interface AddServerRequest {
  name: string; // Name of the server to add
  config: ServerConfig; // Configuration details for the server
}

// Cluster-related types

// Cluster node information
export interface ClusterNode {
  id: string; // Unique identifier for the node (e.g., UUID)
  name: string; // Human-readable name of the node
  host: string; // Hostname or IP address
  port: number; // Port number the node is running on
  url: string; // Full URL to access the node (e.g., 'http://node1:3000')
  status: 'active' | 'inactive' | 'unhealthy'; // Current status of the node
  lastHeartbeat: number; // Timestamp of last heartbeat
  servers: string[]; // List of MCP server names hosted on this node
  metadata?: Record<string, any>; // Additional metadata about the node
}

// Cluster configuration
export interface ClusterConfig {
  enabled: boolean; // Whether cluster mode is enabled
  mode: 'standalone' | 'node' | 'coordinator'; // Cluster operating mode
  node?: {
    // Configuration when running as a cluster node
    id?: string; // Node ID (generated if not provided)
    name?: string; // Node name (defaults to hostname)
    coordinatorUrl: string; // URL of the coordinator node
    heartbeatInterval?: number; // Heartbeat interval in milliseconds (default: 5000)
    registerOnStartup?: boolean; // Whether to register with coordinator on startup (default: true)
  };
  coordinator?: {
    // Configuration when running as coordinator
    nodeTimeout?: number; // Time in ms before marking a node as unhealthy (default: 15000)
    cleanupInterval?: number; // Interval for cleaning up inactive nodes (default: 30000)
    stickySessionTimeout?: number; // Sticky session timeout in milliseconds (default: 3600000, 1 hour)
  };
  stickySession?: {
    enabled: boolean; // Whether sticky sessions are enabled (default: true for cluster mode)
    strategy: 'consistent-hash' | 'cookie' | 'header'; // Strategy for session affinity (default: consistent-hash)
    cookieName?: string; // Cookie name for cookie-based sticky sessions (default: 'MCPHUB_NODE')
    headerName?: string; // Header name for header-based sticky sessions (default: 'X-MCPHub-Node')
  };
}

// Cluster server replica configuration
export interface ServerReplica {
  serverId: string; // MCP server identifier
  nodeId: string; // Node hosting this replica
  nodeUrl: string; // URL to access this replica
  status: 'active' | 'inactive'; // Status of this replica
  weight?: number; // Load balancing weight (default: 1)
}

// Session affinity information
export interface SessionAffinity {
  sessionId: string; // Session identifier
  nodeId: string; // Node ID for this session
  serverId?: string; // Optional: specific server this session is bound to
  createdAt: number; // Timestamp when session was created
  lastAccessed: number; // Timestamp of last access
  expiresAt: number; // Timestamp when session expires
}
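For reference, a `systemConfig` fragment instantiating the cluster interfaces above might have looked like the following (illustrative values only; this compare removes the cluster feature, so these fields are no longer recognized):

```json
{
  "systemConfig": {
    "cluster": {
      "enabled": true,
      "mode": "coordinator",
      "coordinator": {
        "nodeTimeout": 15000,
        "cleanupInterval": 30000,
        "stickySessionTimeout": 3600000
      },
      "stickySession": {
        "enabled": true,
        "strategy": "consistent-hash",
        "cookieName": "MCPHUB_NODE"
      }
    }
  }
}
```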

@@ -1,335 +0,0 @@
/**
 * Cluster Service Tests
 */

import {
  isClusterEnabled,
  getClusterMode,
  registerNode,
  updateNodeHeartbeat,
  getActiveNodes,
  getAllNodes,
  getServerReplicas,
  getNodeForSession,
  getSessionAffinity,
  removeSessionAffinity,
  getClusterStats,
  shutdownClusterService,
} from '../../src/services/clusterService';
import { ClusterNode } from '../../src/types/index';
import * as configModule from '../../src/config/index.js';

// Mock the config module
jest.mock('../../src/config/index.js', () => ({
  loadSettings: jest.fn(),
}));

describe('Cluster Service', () => {
  const loadSettings = configModule.loadSettings as jest.MockedFunction<typeof configModule.loadSettings>;

  beforeEach(() => {
    jest.clearAllMocks();
  });

  afterEach(() => {
    // Clean up cluster service to reset state
    shutdownClusterService();
  });

  describe('Configuration', () => {
    it('should return false when cluster is not enabled', () => {
      loadSettings.mockReturnValue({
        mcpServers: {},
      });

      expect(isClusterEnabled()).toBe(false);
    });

    it('should return true when cluster is enabled', () => {
      loadSettings.mockReturnValue({
        mcpServers: {},
        systemConfig: {
          cluster: {
            enabled: true,
            mode: 'coordinator',
          },
        },
      });

      expect(isClusterEnabled()).toBe(true);
    });

    it('should return standalone mode when cluster is not configured', () => {
      loadSettings.mockReturnValue({
        mcpServers: {},
      });

      expect(getClusterMode()).toBe('standalone');
    });

    it('should return configured mode when cluster is enabled', () => {
      loadSettings.mockReturnValue({
        mcpServers: {},
        systemConfig: {
          cluster: {
            enabled: true,
            mode: 'coordinator',
          },
        },
      });

      expect(getClusterMode()).toBe('coordinator');
    });
  });

  describe('Node Management', () => {
    beforeEach(() => {
      loadSettings.mockReturnValue({
        mcpServers: {},
        systemConfig: {
          cluster: {
            enabled: true,
            mode: 'coordinator',
          },
        },
      });
    });

    it('should register a new node', () => {
      const node: ClusterNode = {
        id: 'node-test-1',
        name: 'Test Node 1',
        host: 'localhost',
        port: 3001,
        url: 'http://localhost:3001',
        status: 'active',
        lastHeartbeat: Date.now(),
        servers: ['server1', 'server2'],
      };

      registerNode(node);
      const nodes = getAllNodes();

      // Find our node (there might be others from previous tests)
      const registeredNode = nodes.find(n => n.id === 'node-test-1');
      expect(registeredNode).toBeTruthy();
      expect(registeredNode?.name).toBe('Test Node 1');
      expect(registeredNode?.servers).toEqual(['server1', 'server2']);
    });

    it('should update node heartbeat', () => {
      const node: ClusterNode = {
        id: 'node-test-2',
        name: 'Test Node 2',
        host: 'localhost',
        port: 3001,
        url: 'http://localhost:3001',
        status: 'active',
        lastHeartbeat: Date.now() - 10000,
        servers: ['server1'],
      };

      registerNode(node);
      const beforeHeartbeat = getAllNodes().find(n => n.id === 'node-test-2')?.lastHeartbeat || 0;

      // Wait a bit to ensure timestamp changes
      setTimeout(() => {
        updateNodeHeartbeat('node-test-2', ['server1', 'server2']);
        const updatedNode = getAllNodes().find(n => n.id === 'node-test-2');
        const afterHeartbeat = updatedNode?.lastHeartbeat || 0;

        expect(afterHeartbeat).toBeGreaterThan(beforeHeartbeat);
        expect(updatedNode?.servers).toEqual(['server1', 'server2']);
      }, 10);
    });

    it('should get active nodes only', () => {
      const node1: ClusterNode = {
        id: 'node-active-1',
        name: 'Active Node',
        host: 'localhost',
        port: 3001,
        url: 'http://localhost:3001',
        status: 'active',
        lastHeartbeat: Date.now(),
        servers: ['server1'],
      };

      registerNode(node1);

      const activeNodes = getActiveNodes();
      const activeNode = activeNodes.find(n => n.id === 'node-active-1');
      expect(activeNode).toBeTruthy();
      expect(activeNode?.status).toBe('active');
    });
  });

  describe('Server Replicas', () => {
    beforeEach(() => {
      loadSettings.mockReturnValue({
        mcpServers: {},
        systemConfig: {
          cluster: {
            enabled: true,
            mode: 'coordinator',
          },
        },
      });
    });

    it('should track server replicas across nodes', () => {
      const node1: ClusterNode = {
        id: 'node-replica-1',
        name: 'Node 1',
        host: 'localhost',
        port: 3001,
        url: 'http://localhost:3001',
        status: 'active',
        lastHeartbeat: Date.now(),
        servers: ['test-server-1', 'test-server-2'],
      };

      const node2: ClusterNode = {
        id: 'node-replica-2',
        name: 'Node 2',
        host: 'localhost',
        port: 3002,
        url: 'http://localhost:3002',
        status: 'active',
        lastHeartbeat: Date.now(),
        servers: ['test-server-1', 'test-server-3'],
      };

      registerNode(node1);
      registerNode(node2);

      const server1Replicas = getServerReplicas('test-server-1');
      expect(server1Replicas.length).toBeGreaterThanOrEqual(2);
      expect(server1Replicas.map(r => r.nodeId)).toContain('node-replica-1');
      expect(server1Replicas.map(r => r.nodeId)).toContain('node-replica-2');
    });
  });

  describe('Session Affinity', () => {
    beforeEach(() => {
      loadSettings.mockReturnValue({
        mcpServers: {},
        systemConfig: {
          cluster: {
            enabled: true,
            mode: 'coordinator',
            stickySession: {
              enabled: true,
              strategy: 'consistent-hash',
            },
          },
        },
      });
    });

    it('should maintain session affinity with consistent hash', () => {
      const node1: ClusterNode = {
        id: 'node-affinity-1',
        name: 'Node 1',
        host: 'localhost',
        port: 3001,
        url: 'http://localhost:3001',
        status: 'active',
        lastHeartbeat: Date.now(),
        servers: ['server1'],
      };

      registerNode(node1);

      const sessionId = 'test-session-consistent-hash';
      const firstNode = getNodeForSession(sessionId);
      const secondNode = getNodeForSession(sessionId);

      expect(firstNode).toBeTruthy();
      expect(secondNode).toBeTruthy();
      expect(firstNode?.id).toBe(secondNode?.id);
    });

    it('should create and retrieve session affinity', () => {
      const node1: ClusterNode = {
        id: 'node-affinity-2',
        name: 'Node 1',
        host: 'localhost',
        port: 3001,
        url: 'http://localhost:3001',
        status: 'active',
        lastHeartbeat: Date.now(),
        servers: ['server1'],
      };

      registerNode(node1);

      const sessionId = 'test-session-retrieve';
      const selectedNode = getNodeForSession(sessionId);

      const affinity = getSessionAffinity(sessionId);
      expect(affinity).toBeTruthy();
      expect(affinity?.sessionId).toBe(sessionId);
      expect(affinity?.nodeId).toBe(selectedNode?.id);
    });

    it('should remove session affinity', () => {
      const node1: ClusterNode = {
        id: 'node-affinity-3',
        name: 'Node 1',
        host: 'localhost',
        port: 3001,
        url: 'http://localhost:3001',
        status: 'active',
        lastHeartbeat: Date.now(),
        servers: ['server1'],
      };

      registerNode(node1);

      const sessionId = 'test-session-remove';
      getNodeForSession(sessionId);

      let affinity = getSessionAffinity(sessionId);
      expect(affinity).toBeTruthy();

      removeSessionAffinity(sessionId);
      affinity = getSessionAffinity(sessionId);
      expect(affinity).toBeNull();
    });
  });

  describe('Cluster Statistics', () => {
    beforeEach(() => {
      loadSettings.mockReturnValue({
        mcpServers: {},
        systemConfig: {
          cluster: {
            enabled: true,
            mode: 'coordinator',
          },
        },
      });
    });

    it('should return cluster statistics', () => {
      const node1: ClusterNode = {
        id: 'node-stats-1',
        name: 'Node 1',
        host: 'localhost',
        port: 3001,
        url: 'http://localhost:3001',
        status: 'active',
        lastHeartbeat: Date.now(),
        servers: ['unique-server-1', 'unique-server-2'],
      };

      registerNode(node1);

      const stats = getClusterStats();
      expect(stats.nodes).toBeGreaterThanOrEqual(1);
      expect(stats.activeNodes).toBeGreaterThanOrEqual(1);
      expect(stats.servers).toBeGreaterThanOrEqual(2);
    });
  });
});
tests/services/mcpService-on-demand.test.ts (new file, 340 lines)
@@ -0,0 +1,340 @@
|
||||
import { describe, it, expect, jest, beforeEach, afterEach } from '@jest/globals';

// Mock dependencies before importing mcpService
jest.mock('../../src/services/oauthService.js', () => ({
  initializeAllOAuthClients: jest.fn(),
}));

jest.mock('../../src/services/oauthClientRegistration.js', () => ({
  registerOAuthClient: jest.fn(),
}));

jest.mock('../../src/services/mcpOAuthProvider.js', () => ({
  createOAuthProvider: jest.fn(),
}));

jest.mock('../../src/services/groupService.js', () => ({
  getServersInGroup: jest.fn(),
  getServerConfigInGroup: jest.fn(),
}));

jest.mock('../../src/services/sseService.js', () => ({
  getGroup: jest.fn(),
}));

jest.mock('../../src/services/vectorSearchService.js', () => ({
  saveToolsAsVectorEmbeddings: jest.fn(),
  searchToolsByVector: jest.fn(() => Promise.resolve([])),
}));

jest.mock('../../src/services/services.js', () => ({
  getDataService: jest.fn(() => ({
    filterData: (data: any) => data,
  })),
}));

jest.mock('../../src/config/index.js', () => ({
  default: {
    mcpHubName: 'test-hub',
    mcpHubVersion: '1.0.0',
    initTimeout: 60000,
  },
  loadSettings: jest.fn(() => ({})),
  expandEnvVars: jest.fn((val: string) => val),
  replaceEnvVars: jest.fn((obj: any) => obj),
  getNameSeparator: jest.fn(() => '-'),
}));

// Mock Client
const mockClient = {
  connect: jest.fn(),
  close: jest.fn(),
  listTools: jest.fn(),
  listPrompts: jest.fn(),
  getServerCapabilities: jest.fn(() => ({
    tools: {},
    prompts: {},
  })),
  callTool: jest.fn(),
};

jest.mock('@modelcontextprotocol/sdk/client/index.js', () => ({
  Client: jest.fn(() => mockClient),
}));

// Mock StdioClientTransport
const mockTransport = {
  close: jest.fn(),
  stderr: null,
};

jest.mock('@modelcontextprotocol/sdk/client/stdio.js', () => ({
  StdioClientTransport: jest.fn(() => mockTransport),
}));

// Mock DAO
const mockServerDao = {
  findAll: jest.fn(),
  findById: jest.fn(),
  create: jest.fn(),
  update: jest.fn(),
  delete: jest.fn(),
  exists: jest.fn(),
  setEnabled: jest.fn(),
};

jest.mock('../../src/dao/index.js', () => ({
  getServerDao: jest.fn(() => mockServerDao),
}));

import { initializeClientsFromSettings, handleCallToolRequest } from '../../src/services/mcpService.js';

describe('On-Demand MCP Server Connection Mode', () => {
  beforeEach(() => {
    jest.clearAllMocks();
    mockClient.connect.mockResolvedValue(undefined);
    mockClient.close.mockReturnValue(undefined);
    mockClient.listTools.mockResolvedValue({
      tools: [
        {
          name: 'test-tool',
          description: 'Test tool',
          inputSchema: { type: 'object' },
        },
      ],
    });
    mockClient.listPrompts.mockResolvedValue({
      prompts: [],
    });
    mockClient.callTool.mockResolvedValue({
      content: [{ type: 'text', text: 'Success' }],
    });
    mockTransport.close.mockReturnValue(undefined);
  });

  afterEach(() => {
    jest.restoreAllMocks();
  });

  describe('Server Initialization', () => {
    it('should not maintain persistent connection for on-demand servers', async () => {
      mockServerDao.findAll.mockResolvedValue([
        {
          name: 'on-demand-server',
          command: 'node',
          args: ['test.js'],
          connectionMode: 'on-demand',
          enabled: true,
        },
      ]);

      const serverInfos = await initializeClientsFromSettings(true);

      expect(serverInfos).toHaveLength(1);
      expect(serverInfos[0].name).toBe('on-demand-server');
      expect(serverInfos[0].connectionMode).toBe('on-demand');
      expect(serverInfos[0].status).toBe('disconnected');
      // Should connect once to get tools, then disconnect
      expect(mockClient.connect).toHaveBeenCalledTimes(1);
      expect(mockTransport.close).toHaveBeenCalledTimes(1);
    });

    it('should load tools during initialization for on-demand servers', async () => {
      mockServerDao.findAll.mockResolvedValue([
        {
          name: 'on-demand-server',
          command: 'node',
          args: ['test.js'],
          connectionMode: 'on-demand',
          enabled: true,
        },
      ]);

      const serverInfos = await initializeClientsFromSettings(true);

      expect(serverInfos[0].tools).toHaveLength(1);
      expect(serverInfos[0].tools[0].name).toBe('on-demand-server-test-tool');
      expect(mockClient.listTools).toHaveBeenCalled();
    });

    it('should maintain persistent connection for default connection mode', async () => {
      mockServerDao.findAll.mockResolvedValue([
        {
          name: 'persistent-server',
          command: 'node',
          args: ['test.js'],
          enabled: true,
        },
      ]);

      const serverInfos = await initializeClientsFromSettings(true);

      expect(serverInfos).toHaveLength(1);
      expect(serverInfos[0].connectionMode).toBe('persistent');
      expect(mockClient.connect).toHaveBeenCalledTimes(1);
      // Should not disconnect immediately
      expect(mockTransport.close).not.toHaveBeenCalled();
    });

    it('should handle initialization errors for on-demand servers gracefully', async () => {
      mockClient.connect.mockRejectedValueOnce(new Error('Connection failed'));
      mockServerDao.findAll.mockResolvedValue([
        {
          name: 'failing-server',
          command: 'node',
          args: ['test.js'],
          connectionMode: 'on-demand',
          enabled: true,
        },
      ]);

      const serverInfos = await initializeClientsFromSettings(true);

      expect(serverInfos).toHaveLength(1);
      expect(serverInfos[0].status).toBe('disconnected');
      expect(serverInfos[0].error).toContain('Failed to initialize');
    });
  });

  describe('Tool Invocation with On-Demand Servers', () => {
    beforeEach(async () => {
      // Set up server infos with an on-demand server that's disconnected
      mockServerDao.findAll.mockResolvedValue([
        {
          name: 'on-demand-server',
          command: 'node',
          args: ['test.js'],
          connectionMode: 'on-demand',
          enabled: true,
        },
      ]);

      // Initialize to get the server set up
      await initializeClientsFromSettings(true);

      // Clear mocks after initialization
      jest.clearAllMocks();

      // Reset mock implementations
      mockClient.connect.mockResolvedValue(undefined);
      mockClient.listTools.mockResolvedValue({
        tools: [
          {
            name: 'test-tool',
            description: 'Test tool',
            inputSchema: { type: 'object' },
          },
        ],
      });
      mockClient.callTool.mockResolvedValue({
        content: [{ type: 'text', text: 'Success' }],
      });
    });

    it('should connect on-demand server before tool invocation', async () => {
      const request = {
        params: {
          name: 'on-demand-server-test-tool',
          arguments: { arg1: 'value1' },
        },
      };

      await handleCallToolRequest(request, {});

      // Should connect before calling the tool
      expect(mockClient.connect).toHaveBeenCalledTimes(1);
      expect(mockClient.callTool).toHaveBeenCalledWith(
        {
          name: 'test-tool',
          arguments: { arg1: 'value1' },
        },
        undefined,
        expect.any(Object),
      );
    });

    it('should disconnect on-demand server after tool invocation', async () => {
      const request = {
        params: {
          name: 'on-demand-server-test-tool',
          arguments: {},
        },
      };

      await handleCallToolRequest(request, {});

      // Should disconnect after calling the tool
      expect(mockTransport.close).toHaveBeenCalledTimes(1);
      expect(mockClient.close).toHaveBeenCalledTimes(1);
    });

    it('should disconnect on-demand server even if tool invocation fails', async () => {
      mockClient.callTool.mockRejectedValueOnce(new Error('Tool execution failed'));

      const request = {
        params: {
          name: 'on-demand-server-test-tool',
          arguments: {},
        },
      };

      try {
        await handleCallToolRequest(request, {});
      } catch (error) {
        // Expected to fail
      }

      // Should still disconnect after error
      expect(mockTransport.close).toHaveBeenCalledTimes(1);
      expect(mockClient.close).toHaveBeenCalledTimes(1);
    });

    it('should return error for call_tool if server not found', async () => {
      const request = {
        params: {
          name: 'call_tool',
          arguments: {
            toolName: 'nonexistent-server-tool',
            arguments: {},
          },
        },
      };

      const result = await handleCallToolRequest(request, {});

      expect(result.isError).toBe(true);
      expect(result.content[0].text).toContain('No available servers found');
    });
  });

  describe('Mixed Server Modes', () => {
    it('should handle both persistent and on-demand servers together', async () => {
      mockServerDao.findAll.mockResolvedValue([
        {
          name: 'persistent-server',
          command: 'node',
          args: ['persistent.js'],
          enabled: true,
        },
        {
          name: 'on-demand-server',
          command: 'node',
          args: ['on-demand.js'],
          connectionMode: 'on-demand',
          enabled: true,
        },
      ]);

      const serverInfos = await initializeClientsFromSettings(true);

      expect(serverInfos).toHaveLength(2);

      const persistentServer = serverInfos.find(s => s.name === 'persistent-server');
      const onDemandServer = serverInfos.find(s => s.name === 'on-demand-server');

      expect(persistentServer?.connectionMode).toBe('persistent');
      expect(onDemandServer?.connectionMode).toBe('on-demand');
      expect(onDemandServer?.status).toBe('disconnected');
    });
  });
});