Mirror of https://github.com/samanhappy/mcphub.git (synced 2026-01-01 04:08:52 -05:00)
Compare commits (5 commits, `copilot/im…` ... `copilot/se…`):

- da6d217bb4
- 0017023192
- e097c027be
- 71958ef86b
- 5e20b2c261
.github/copilot-instructions.md (new file, vendored, 272 lines)

# MCPHub Coding Instructions

**ALWAYS follow these instructions first and only fall back to additional search and context gathering if the information here is incomplete or found to be in error.**

## Project Overview

MCPHub is a TypeScript/Node.js MCP (Model Context Protocol) server management hub that provides unified access through HTTP endpoints. It serves as a centralized dashboard for managing multiple MCP servers with real-time monitoring, authentication, and flexible routing.

**Core Components:**

- **Backend**: Express.js + TypeScript + ESM (`src/server.ts`)
- **Frontend**: React/Vite + Tailwind CSS (`frontend/`)
- **MCP Integration**: Connects multiple MCP servers (`src/services/mcpService.ts`)
- **Authentication**: JWT-based with bcrypt password hashing
- **Configuration**: JSON-based MCP server definitions (`mcp_settings.json`)
- **Documentation**: API docs and usage instructions (`docs/`)

## Working Effectively

### Bootstrap and Setup (CRITICAL - Follow Exact Steps)

```bash
# Install pnpm if not available
npm install -g pnpm

# Install dependencies - takes ~30 seconds
pnpm install

# Setup environment (optional)
cp .env.example .env

# Build and test to verify setup
pnpm lint           # ~3 seconds - NEVER CANCEL
pnpm backend:build  # ~5 seconds - NEVER CANCEL
pnpm test:ci        # ~16 seconds - NEVER CANCEL. Set timeout to 60+ seconds
pnpm frontend:build # ~5 seconds - NEVER CANCEL
pnpm build          # ~10 seconds total - NEVER CANCEL. Set timeout to 60+ seconds
```

**CRITICAL TIMING**: These commands are fast but NEVER CANCEL them. Always wait for completion.
### Development Environment

```bash
# Start both backend and frontend (recommended for most development)
pnpm dev           # Backend on :3001, Frontend on :5173

# OR start separately (required on Windows, optional on Linux/macOS)
# Terminal 1: Backend only
pnpm backend:dev   # Runs on port 3000 (or PORT env var)

# Terminal 2: Frontend only
pnpm frontend:dev  # Runs on port 5173, proxies API to backend
```

**NEVER CANCEL**: Development servers may take 10-15 seconds to fully initialize all MCP servers.
### Build Commands (Production)

```bash
# Full production build - takes ~10 seconds total
pnpm build          # NEVER CANCEL - Set timeout to 60+ seconds

# Individual builds
pnpm backend:build  # TypeScript compilation - ~5 seconds
pnpm frontend:build # Vite build - ~5 seconds

# Start production server
pnpm start          # Requires dist/ and frontend/dist/ to exist
```
### Testing and Validation

```bash
# Run all tests - takes ~16 seconds with 73 tests
pnpm test:ci        # NEVER CANCEL - Set timeout to 60+ seconds

# Development testing
pnpm test           # Interactive mode
pnpm test:watch     # Watch mode for development
pnpm test:coverage  # With coverage report

# Code quality
pnpm lint           # ESLint - ~3 seconds
pnpm format         # Prettier formatting - ~3 seconds
```

**CRITICAL**: All tests MUST pass before committing. Do not modify tests to make them pass unless specifically required for your changes.
## Manual Validation Requirements

**ALWAYS perform these validation steps after making changes:**

### 1. Basic Application Functionality

```bash
# Start the application
pnpm dev

# Verify backend responds (in another terminal)
curl http://localhost:3000/api/health
# Expected: Should return health status

# Verify frontend serves
curl -I http://localhost:3000/
# Expected: HTTP 200 OK with HTML content
```
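The one-shot `curl` checks above can race the 10-15 second MCP startup window. A small polling wrapper avoids that; the helper below is a sketch (the retry count and interval are assumptions, and the health URL follows the example above):

```bash
# Hypothetical helper: poll a URL until it answers, instead of a single curl.
wait_for_http() {
  url="$1"
  tries="${2:-30}"   # assumption: up to 30 attempts, one second apart
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS "$url" >/dev/null 2>&1; then
      echo "up: $url"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "gave up: $url"
  return 1
}

# Usage against the health endpoint from the checks above:
# wait_for_http http://localhost:3000/api/health
```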
### 2. MCP Server Integration Test

Check that MCP servers are loading by watching the startup logs. Expected log output should include:

- "Successfully connected client for server: [name]"
- "Successfully listed [N] tools for server: [name]"
- Some servers may fail due to missing API keys (normal in dev)
### 3. Build Verification

```bash
# Verify the production build works
pnpm build
node scripts/verify-dist.js
# Expected: "✅ Verification passed! Frontend and backend dist files are present."
```

**NEVER skip these validation steps**. If any fail, debug and fix before proceeding.
## Project Structure and Key Files

### Critical Backend Files

- `src/index.ts` - Application entry point
- `src/server.ts` - Express server setup and middleware
- `src/services/mcpService.ts` - **Core MCP server management logic**
- `src/config/index.ts` - Configuration management
- `src/routes/` - HTTP route definitions
- `src/controllers/` - HTTP request handlers
- `src/dao/` - Data access layer (supports JSON file & PostgreSQL)
- `src/db/` - TypeORM entities & repositories (for PostgreSQL mode)
- `src/types/index.ts` - TypeScript type definitions

### DAO Layer (Dual Data Source)

MCPHub supports **JSON file** (default) and **PostgreSQL** storage:

- Set `USE_DB=true` + `DB_URL=postgresql://...` to use the database
- When modifying data structures, update: `src/types/`, `src/dao/`, `src/db/entities/`, `src/db/repositories/`, `src/utils/migration.ts`
- See `AGENTS.md` for the detailed DAO modification checklist
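The dual data source toggle can be exercised as sketched below; the connection string is purely illustrative:

```bash
# Hedged sketch: switch MCPHub to PostgreSQL storage (values are examples only).
export USE_DB=true
export DB_URL="postgresql://mcphub:mcphub@localhost:5432/mcphub"
# The backend reads these at startup, e.g. `USE_DB=true DB_URL=... pnpm start`.
echo "storage mode: ${USE_DB:+postgres}"
```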
### Critical Frontend Files

- `frontend/src/` - React application source
- `frontend/src/pages/` - Page components (development entry point)
- `frontend/src/components/` - Reusable UI components
- `frontend/src/utils/fetchInterceptor.js` - Backend API interaction

### Configuration Files

- `mcp_settings.json` - **MCP server definitions and user accounts**
- `package.json` - Dependencies and scripts
- `tsconfig.json` - TypeScript configuration
- `jest.config.cjs` - Test configuration
- `.eslintrc.json` - Linting rules

### Docker and Deployment

- `Dockerfile` - Multi-stage build with Python base + Node.js
- `entrypoint.sh` - Docker startup script
- `bin/cli.js` - NPM package CLI entry point
## Development Process and Conventions

### Code Style Requirements

- **ESM modules**: Always use `.js` extensions in imports, not `.ts`
- **English only**: All code comments must be written in English
- **TypeScript strict**: Follow strict type checking rules
- **Import style**: `import { something } from './file.js'` (note the `.js` extension)

### Key Configuration Notes

- **MCP servers**: Defined in `mcp_settings.json` with command/args
- **Endpoints**: `/mcp/{group|server}` and `/mcp/$smart` for routing
- **i18n**: Frontend uses react-i18next with files in the `locales/` folder
- **Authentication**: JWT tokens with bcrypt password hashing
- **Default credentials**: admin/admin123 (configured in `mcp_settings.json`)
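A request against the routing endpoints above might look like the sketch below. The group name and the JSON-RPC payload shape are assumptions (standard MCP JSON-RPC), not copied from the repo:

```bash
# Build a minimal JSON-RPC payload for a tools/list call.
PAYLOAD='{"jsonrpc":"2.0","id":1,"method":"tools/list"}'
printf '%s\n' "$PAYLOAD" > /tmp/mcp_payload.json

# With the backend running, send it to a group endpoint (name is hypothetical):
#   curl -X POST http://localhost:3000/mcp/my-group \
#     -H "Content-Type: application/json" \
#     -H "Accept: application/json, text/event-stream" \
#     -d @/tmp/mcp_payload.json
echo "payload written to /tmp/mcp_payload.json"
```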
### Development Entry Points

- **Add MCP server**: Modify `mcp_settings.json` and restart
- **New API endpoint**: Add route in `src/routes/`, controller in `src/controllers/`
- **Frontend feature**: Start from `frontend/src/pages/` or `frontend/src/components/`
- **Add tests**: Follow patterns in `tests/` directory

### Common Development Tasks

#### Adding a new MCP server

1. Add server definition to `mcp_settings.json`
2. Restart backend to load new server
3. Check logs for successful connection
4. Test via dashboard or API endpoints
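Step 1 above edits `mcp_settings.json`; a minimal server entry might look like the sketch below. The server name, command, and args are placeholders, and the syntax check is optional:

```bash
# Write a sample settings fragment (placeholders, not a real server).
cat > /tmp/mcp_settings_sample.json <<'EOF'
{
  "mcpServers": {
    "example-server": {
      "command": "npx",
      "args": ["-y", "example-mcp-server"]
    }
  }
}
EOF

# Optional: syntax-check the JSON before restarting the backend.
if command -v node >/dev/null 2>&1; then
  node -e "JSON.parse(require('fs').readFileSync('/tmp/mcp_settings_sample.json','utf8'))" \
    && echo "settings OK"
fi
```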
#### API development

1. Define route in `src/routes/`
2. Implement controller in `src/controllers/`
3. Add types in `src/types/index.ts` if needed
4. Write tests in `tests/controllers/`

#### Frontend development

1. Create/modify components in `frontend/src/components/`
2. Add pages in `frontend/src/pages/`
3. Update routing if needed
4. Test in development mode with `pnpm frontend:dev`

#### Documentation

1. Update or add docs in the `docs/` folder
2. Ensure README.md reflects any major changes
## Validation and CI Requirements

### Before Committing - ALWAYS Run

```bash
pnpm lint           # Must pass - ~3 seconds
pnpm backend:build  # Must compile - ~5 seconds
pnpm test:ci        # All tests must pass - ~16 seconds
pnpm build          # Full build must work - ~10 seconds
```

**CRITICAL**: CI will fail if any of these commands fail. Fix issues locally first.

### CI Pipeline (`.github/workflows/ci.yml`)

- Runs on Node.js 20.x
- Tests: linting, type checking, unit tests with coverage
- **NEVER CANCEL**: CI builds may take 2-3 minutes total
## Troubleshooting

### Common Issues

- **"uvx command not found"**: Some MCP servers require `uvx` (Python package manager) - this is expected in development
- **Port already in use**: Change the PORT environment variable or kill existing processes
- **Frontend not loading**: Ensure the frontend was built with `pnpm frontend:build`
- **MCP server connection failed**: Check the server command/args in `mcp_settings.json`

### Build Failures

- **TypeScript errors**: Run `pnpm backend:build` to see compilation errors
- **Test failures**: Run `pnpm test:verbose` for detailed test output
- **Lint errors**: Run `pnpm lint` and fix reported issues

### Development Issues

- **Backend not starting**: Check for port conflicts, verify `mcp_settings.json` syntax
- **Frontend proxy errors**: Ensure the backend is running before starting the frontend
- **Hot reload not working**: Restart the development server
## Performance Notes

- **Install time**: `pnpm install` takes ~30 seconds
- **Build time**: Full build takes ~10 seconds
- **Test time**: Complete test suite takes ~16 seconds
- **Startup time**: Backend initialization takes 10-15 seconds (MCP server connections)

**Remember**: NEVER CANCEL any build or test commands. Always wait for completion even if they seem slow.
AGENTS.md (modified, 386 lines)

# Repository Guidelines

**ALWAYS follow these instructions first and only fall back to additional search and context gathering if the information here is incomplete or found to be in error.**

These notes align current contributors around the code layout, daily commands, and collaboration habits that keep `@samanhappy/mcphub` moving quickly.
## Project Structure & Module Organization

### Critical Backend Files

- `src/index.ts` - Application entry point
- `src/server.ts` - Express server setup and middleware (orchestrating HTTP bootstrap)
- `src/services/mcpService.ts` - **Core MCP server management logic**
- `src/config/index.ts` - Configuration management
- `src/routes/` - HTTP route definitions
- `src/controllers/` - HTTP request handlers
- `src/dao/` - Data access layer (supports JSON file & PostgreSQL)
- `src/db/` - TypeORM entities & repositories (for PostgreSQL mode)
- `src/types/index.ts` - TypeScript type definitions and shared DTOs
- `src/utils/` - Utility functions and helpers

### Critical Frontend Files

- `frontend/src/` - React application source (Vite + React dashboard)
- `frontend/src/pages/` - Page components (development entry point)
- `frontend/src/components/` - Reusable UI components
- `frontend/src/utils/fetchInterceptor.js` - Backend API interaction
- `frontend/public/` - Static assets

### Configuration Files

- `mcp_settings.json` - **MCP server definitions and user accounts**
- `package.json` - Dependencies and scripts
- `tsconfig.json` - TypeScript configuration
- `jest.config.cjs` - Test configuration
- `.eslintrc.json` - Linting rules

### Test Organization

- Jest-aware test code is split between colocated specs (`src/**/*.{test,spec}.ts`) and higher-level suites in `tests/`
- Use `tests/utils/` helpers when exercising the CLI or SSE flows
- Mirror production directory names when adding new suites
- End filenames with `.test.ts` or `.spec.ts` for automatic discovery

### Build Artifacts

- `dist/` - Backend build output (TypeScript compilation)
- `frontend/dist/` - Frontend build output (Vite bundle)
- `coverage/` - Test coverage reports
- **Never edit these manually**

### Localization

- Translations sit in `locales/` (en.json, fr.json, tr.json, zh.json)
- Frontend uses react-i18next

### Docker and Deployment

- `Dockerfile` - Multi-stage build with Python base + Node.js
- `entrypoint.sh` - Docker startup script
- `bin/cli.js` - NPM package CLI entry point
## Build, Test, and Development Commands

### Development Environment

```bash
# Start both backend and frontend (recommended for most development)
pnpm dev               # Backend on :3001, Frontend on :5173

# OR start separately (required on Windows, optional on Linux/macOS)
# Terminal 1: Backend only
pnpm backend:dev       # Runs on port 3000 (or PORT env var)

# Terminal 2: Frontend only
pnpm frontend:dev      # Runs on port 5173, proxies API to backend

# Frontend preview (production build)
pnpm frontend:preview  # Preview production build
```

- `pnpm dev` runs backend (`tsx watch src/index.ts`) and frontend (`vite`) together for local iteration.
- `pnpm backend:dev`, `pnpm frontend:dev`, and `pnpm frontend:preview` target each surface independently; prefer them when debugging one stack.

**NEVER CANCEL**: Development servers may take 10-15 seconds to fully initialize all MCP servers.

### Production Build

```bash
# Full production build - takes ~10 seconds total
pnpm build           # NEVER CANCEL - Set timeout to 60+ seconds

# Individual builds
pnpm backend:build   # TypeScript compilation to dist/ - ~5 seconds
pnpm frontend:build  # Vite build to frontend/dist/ - ~5 seconds

# Start production server
pnpm start           # Requires dist/ and frontend/dist/ to exist
```

Run `pnpm build` before release or publishing.

### Testing and Validation

```bash
# Run all tests - takes ~16 seconds with 73 tests
pnpm test:ci        # NEVER CANCEL - Set timeout to 60+ seconds

# Development testing
pnpm test           # Interactive mode
pnpm test:watch     # Watch mode for development
pnpm test:coverage  # With coverage report

# Code quality
pnpm lint           # ESLint - ~3 seconds
pnpm format         # Prettier formatting - ~3 seconds
```

**CRITICAL**: All tests MUST pass before committing. Do not modify tests to make them pass unless specifically required for your changes.

### Performance Notes

- **Install time**: `pnpm install` takes ~30 seconds
- **Build time**: Full build takes ~10 seconds
- **Test time**: Complete test suite takes ~16 seconds
- **Startup time**: Backend initialization takes 10-15 seconds (MCP server connections)
## Coding Style & Naming Conventions

- **TypeScript everywhere**: Default to 2-space indentation and single quotes, letting Prettier settle formatting
- **ESM modules**: Always use `.js` extensions in imports, not `.ts` (e.g., `import { something } from './file.js'`)
- **English only**: All code comments must be written in English
- **TypeScript strict**: Follow strict type checking rules
- **Naming conventions**:
  - Services and data access layers: use suffixes (`UserService`, `AuthDao`)
  - React components and files: `PascalCase`
  - Utility modules: `camelCase`
- **Types and DTOs**: Keep in `src/types` to avoid duplication; re-export through index files only when it clarifies imports
- **ESLint configuration**: Assumes ES modules

## Testing Guidelines

- Mirror production directory names when adding new suites and end filenames with `.test.ts` or `.spec.ts` for automatic discovery.
- Aim to maintain or raise coverage when touching critical flows (auth, OAuth, SSE); add integration tests under `tests/integration/` when touching cross-service logic.
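The discovery rule above can be demonstrated on a throwaway tree (paths are illustrative):

```bash
# Only *.test.ts / *.spec.ts files are picked up; other suffixes are ignored.
mkdir -p /tmp/mcphub-suites/tests/controllers
touch /tmp/mcphub-suites/tests/controllers/auth.test.ts
touch /tmp/mcphub-suites/tests/controllers/notes.md   # ignored: wrong suffix
find /tmp/mcphub-suites \( -name '*.test.ts' -o -name '*.spec.ts' \) -type f
```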
## Commit & Pull Request Guidelines

- Follow the existing Conventional Commit pattern (`feat:`, `fix:`, `chore:`, etc.) with imperative, present-tense summaries and optional multi-line context.
- Each PR should describe the behavior change, list testing performed, and link issues; include before/after screenshots or GIFs for frontend tweaks.
- Re-run `pnpm build` and `pnpm test` before requesting review, and ensure generated artifacts stay out of the diff.
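The Conventional Commit rule above can be spot-checked with a small shell helper; the type list is an assumption drawn from the common convention, not from this repo's history:

```bash
# Hypothetical checker for a commit summary line.
check_commit() {
  printf '%s' "$1" | grep -Eq '^(feat|fix|chore|docs|refactor|test|perf|style)(\([^)]+\))?: .+'
}

check_commit "feat: add stream parameter to /mcp endpoint" && echo "ok"
check_commit "updated stuff" || echo "rejected"
```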
## DAO Layer & Dual Data Source

MCPHub supports **JSON file** (default) and **PostgreSQL** storage. Set `USE_DB=true` + `DB_URL` to switch.

When adding/changing fields, update **ALL** these files (per the DAO checklist above): `src/types/`, `src/dao/`, `src/db/entities/`, `src/db/repositories/`, `src/utils/migration.ts`.
### Data Type Mapping

| Model          | DAO               | DB Entity      | JSON Path                 |
| -------------- | ----------------- | -------------- | ------------------------- |
| `IUser`        | `UserDao`         | `User`         | `settings.users[]`        |
| `ServerConfig` | `ServerDao`       | `Server`       | `settings.mcpServers{}`   |
| `IGroup`       | `GroupDao`        | `Group`        | `settings.groups[]`       |
| `SystemConfig` | `SystemConfigDao` | `SystemConfig` | `settings.systemConfig`   |
| `UserConfig`   | `UserConfigDao`   | `UserConfig`   | `settings.userConfigs{}`  |
| `BearerKey`    | `BearerKeyDao`    | `BearerKey`    | `settings.bearerKeys[]`   |
| `IOAuthClient` | `OAuthClientDao`  | `OAuthClient`  | `settings.oauthClients[]` |
| `IOAuthToken`  | `OAuthTokenDao`   | `OAuthToken`   | `settings.oauthTokens[]`  |
### Common Pitfalls

- Forgetting the migration script → fields won't migrate to the DB
- Optional fields need `nullable: true` in the entity
- Complex objects need the `simple-json` column type
## Auto-Evolution Guidelines for AI Agents

**This section provides guidelines for AI agents to automatically maintain and improve this document.**

### When to Update AGENTS.md

AI agents MUST update this document in the following situations:

#### 1. Code-Documentation Mismatch Detected

When executing tasks, if you discover that:

- The actual code structure differs from descriptions in this document
- File paths, imports, or module organization has changed
- New critical files or directories exist that aren't documented
- Documented files or patterns no longer exist

**Action**: Immediately update the relevant section to reflect the current codebase state.

**Example scenarios**:

- A controller is now in `src/api/controllers/` instead of `src/controllers/`
- New middleware files exist that should be documented
- The DAO implementation has been refactored with a different structure
- Build output directories have changed

#### 2. User Preferences and Requirements

During conversation, if the user expresses:

- **Coding preferences**: Indentation style, naming conventions, code organization patterns
- **Workflow requirements**: Required validation steps, commit procedures, testing expectations
- **Tool preferences**: Preferred libraries, frameworks, or development tools
- **Quality standards**: Code review criteria, documentation requirements, error handling patterns
- **Development principles**: Architecture decisions, design patterns, best practices

**Action**: Add or update the relevant section to capture these preferences for future reference.

**Example scenarios**:

- User prefers async/await over promises → Update coding style section
- User requires specific test coverage thresholds → Update testing guidelines
- User has strong opinions about error handling → Add to development process section
- User establishes new deployment procedures → Update deployment section

### How to Update AGENTS.md

1. **Identify the Section**: Determine which section needs updating based on the type of change
2. **Make Precise Changes**: Update only the relevant content, maintaining the document structure
3. **Preserve Format**: Keep the existing markdown formatting and organization
4. **Add Context**: If adding new content, ensure it fits logically within existing sections
5. **Verify Accuracy**: After updating, ensure the new information is accurate and complete

### Update Principles

- **Accuracy First**: Documentation must reflect the actual current state
- **Clarity**: Use clear, concise language; avoid ambiguity
- **Completeness**: Include sufficient detail for agents to work effectively
- **Consistency**: Maintain consistent terminology and formatting throughout
- **Actionability**: Focus on concrete, actionable guidance rather than vague descriptions

### Self-Correction Process

Before completing any task:

1. Review relevant sections of AGENTS.md
2. During execution, note any discrepancies between documentation and reality
3. Update AGENTS.md to correct discrepancies
4. Verify the update doesn't conflict with other sections
5. Proceed with the original task using the updated information

### Meta-Update Rule

If this auto-evolution section itself needs improvement based on experience:

- Update it to better serve future agents
- Add new scenarios or principles as they emerge
- Refine the update process based on what works well

**Remember**: This document is a living guide. Keeping it accurate and current is as important as following it.
IMPLEMENTATION_SUMMARY.md (new file, 205 lines)
# Stream Parameter Implementation - Summary

## Overview
Successfully implemented support for a `stream` parameter that allows clients to control whether MCP requests receive Server-Sent Events (SSE) streaming responses or direct JSON responses.

## Problem Statement (Original Question)
> 分析源码,使用 http://localhost:8090/process 请求时,可以使用 stream : false 来设置非流式响应吗
>
> Translation: After analyzing the source code, when using the http://localhost:8090/process request, can we use stream: false to set non-streaming responses?

## Answer
**Yes, absolutely!** While the endpoint path is `/mcp` (not `/process`), the implementation now fully supports using a `stream` parameter to control the response format.

## Implementation Details

### Core Changes
1. **Modified Functions:**
   - `createSessionWithId()` - Added `enableJsonResponse` parameter
   - `createNewSession()` - Added `enableJsonResponse` parameter
   - `handleMcpPostRequest()` - Added robust stream parameter parsing

2. **Parameter Parsing:**
   - Created `parseStreamParam()` helper function
   - Handles multiple input types: boolean, string, number
   - Consistent behavior for query and body parameters
   - Body parameter takes priority over query parameter

3. **Supported Values:**
   - **Truthy (streaming enabled):** `true`, `"true"`, `1`, `"1"`, `"yes"`, `"on"`
   - **Falsy (streaming disabled):** `false`, `"false"`, `0`, `"0"`, `"no"`, `"off"`
   - **Default:** `true` (streaming enabled) for backward compatibility
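The parsing rules above can be sketched as a small helper. This is an illustrative reconstruction from the summary, not the verbatim MCPHub source; the exact signature of `parseStreamParam()` and the `resolveStream` wrapper are assumptions:

```typescript
// Sketch of the parseStreamParam() behavior summarized above (hypothetical
// signature; the real helper lives in the MCPHub backend).
function parseStreamParam(value: unknown, fallback = true): boolean {
  if (typeof value === 'boolean') return value;
  if (typeof value === 'number') return value !== 0;
  if (typeof value === 'string') {
    const v = value.trim().toLowerCase();
    if (['true', '1', 'yes', 'on'].includes(v)) return true;
    if (['false', '0', 'no', 'off'].includes(v)) return false;
  }
  return fallback; // unknown or missing values keep the default (streaming on)
}

// The body parameter wins over the query parameter when both are present.
function resolveStream(query: unknown, body: unknown): boolean {
  return body !== undefined ? parseStreamParam(body) : parseStreamParam(query);
}
```

For example, `resolveStream('true', 0)` yields `false`, because the body value `0` overrides the query value.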
### Usage Examples

#### Query Parameter
```bash
# Disable streaming
curl -X POST "http://localhost:3000/mcp?stream=false" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"method": "initialize", ...}'

# Enable streaming (default)
curl -X POST "http://localhost:3000/mcp?stream=true" ...
```

#### Request Body Parameter
```json
{
  "method": "initialize",
  "stream": false,
  "params": {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "clientInfo": {
      "name": "TestClient",
      "version": "1.0.0"
    }
  },
  "jsonrpc": "2.0",
  "id": 1
}
```

#### All Route Variants
```bash
POST /mcp?stream=false          # Global route
POST /mcp/{group}?stream=false  # Group route
POST /mcp/{server}?stream=false # Server route
POST /mcp/$smart?stream=false   # Smart routing
```

### Response Formats

#### Streaming Response (stream=true or default)
```
HTTP/1.1 200 OK
Content-Type: text/event-stream
mcp-session-id: 550e8400-e29b-41d4-a716-446655440000

data: {"jsonrpc":"2.0","result":{...},"id":1}
```

#### Non-Streaming Response (stream=false)
```
HTTP/1.1 200 OK
Content-Type: application/json
mcp-session-id: 550e8400-e29b-41d4-a716-446655440000

{
  "jsonrpc": "2.0",
  "result": {
    "protocolVersion": "2025-03-26",
    "capabilities": {...},
    "serverInfo": {...}
  },
  "id": 1
}
```
## Testing

### Test Coverage
- **Unit Tests:** 12 tests in `src/services/sseService.test.ts`
  - Basic functionality (6 tests)
  - Edge cases (6 tests)
- **Integration Tests:** 4 tests in `tests/integration/stream-parameter.test.ts`
- **Total:** 207 tests passing (16 new tests added)

### Test Scenarios Covered
1. ✓ Query parameter: stream=false
2. ✓ Query parameter: stream=true
3. ✓ Body parameter: stream=false
4. ✓ Body parameter: stream=true
5. ✓ Priority: body over query
6. ✓ Default: no parameter provided
7. ✓ Edge case: string "false", "0", "no", "off"
8. ✓ Edge case: string "true", "1", "yes", "on"
9. ✓ Edge case: number 0 and 1
10. ✓ Edge case: invalid/unknown values

## Documentation

### Files Created/Updated
1. **New Documentation:**
   - `docs/stream-parameter.md` - Comprehensive guide with examples and use cases

2. **Updated Documentation:**
   - `README.md` - Added link to stream parameter documentation
   - `README.zh.md` - Added link in Chinese README

3. **Test Documentation:**
   - `tests/integration/stream-parameter.test.ts` - Demonstrates usage patterns

### Documentation Topics Covered
- Feature overview
- Usage examples (query and body parameters)
- Response format comparison
- Use cases and when to use each mode
- Technical implementation details
- Backward compatibility notes
- Route variant support
- Limitations and considerations

## Quality Assurance

### Code Review
- ✓ All code review comments addressed
- ✓ No outstanding issues
- ✓ Consistent parsing logic
- ✓ Proper edge case handling

### Validation Results
- ✓ All 207 tests passing
- ✓ TypeScript compilation successful
- ✓ ESLint checks passed
- ✓ Full build completed successfully
- ✓ No breaking changes
- ✓ Backward compatible

## Impact Analysis

### Benefits
1. **Flexibility:** Clients can choose the response format based on their needs
2. **Debugging:** Easier to debug with direct JSON responses
3. **Integration:** Simpler integration with systems expecting JSON
4. **Testing:** More straightforward to test and validate
5. **Backward Compatible:** Existing clients continue to work without changes

### Performance Considerations
- No performance impact on default streaming behavior
- Non-streaming mode may have slightly less overhead for simple requests
- Session management works identically in both modes

### Backward Compatibility
- Default behavior unchanged (streaming enabled)
- All existing clients work without modification
- No breaking changes to API or protocol

## Future Considerations

### Potential Enhancements
1. Add documentation for the OpenAPI specification
2. Consider adding a configuration option to set the default behavior
3. Add metrics/logging for stream parameter usage
4. Consider adding response format negotiation via the Accept header

### Known Limitations
1. The stream parameter only affects POST requests to the /mcp endpoint
2. SSE GET requests for retrieving streams are not affected
3. Session rebuild operations inherit the stream setting from the original request

## Conclusion

The implementation successfully adds flexible stream control to the MCP protocol implementation while maintaining full backward compatibility. The robust parsing logic handles all common value formats, and comprehensive testing ensures reliable behavior across all scenarios.

**Status:** ✅ Complete and Production Ready

---
*Implementation Date: December 25, 2025*
*Total Development Time: ~2 hours*
*Tests Added: 16*
*Lines of Code Changed: ~200*
*Documentation Pages: 1 comprehensive guide*
@@ -78,6 +78,7 @@ http://localhost:3000/mcp/$smart # Smart routing
 | [Quick Start](https://docs.mcphubx.com/quickstart) | Get started in 5 minutes |
 | [Configuration](https://docs.mcphubx.com/configuration/mcp-settings) | MCP server configuration options |
 | [Database Mode](https://docs.mcphubx.com/configuration/database-configuration) | PostgreSQL setup for production |
+| [Stream Parameter](docs/stream-parameter.md) | Control streaming vs JSON responses |
 | [OAuth](https://docs.mcphubx.com/features/oauth) | OAuth 2.0 client and server setup |
 | [Smart Routing](https://docs.mcphubx.com/features/smart-routing) | AI-powered tool discovery |
 | [Docker Setup](https://docs.mcphubx.com/configuration/docker-setup) | Docker deployment guide |

@@ -78,6 +78,7 @@ http://localhost:3000/mcp/$smart # Smart routing
 | [Quick Start](https://docs.mcphubx.com/zh/quickstart) | Get started in 5 minutes |
 | [Configuration](https://docs.mcphubx.com/zh/configuration/mcp-settings) | MCP server configuration options |
 | [Database Mode](https://docs.mcphubx.com/zh/configuration/database-configuration) | PostgreSQL setup for production |
+| [Stream Parameter](docs/stream-parameter.md) | Control streaming vs JSON responses |
 | [OAuth](https://docs.mcphubx.com/zh/features/oauth) | OAuth 2.0 client and server setup |
 | [Smart Routing](https://docs.mcphubx.com/zh/features/smart-routing) | AI-powered tool discovery |
 | [Docker Deployment](https://docs.mcphubx.com/zh/configuration/docker-setup) | Docker deployment guide |
@@ -28,8 +28,7 @@
       "features/server-management",
       "features/group-management",
       "features/smart-routing",
-      "features/oauth",
-      "features/output-compression"
+      "features/oauth"
     ]
   },
   {
@@ -1,109 +0,0 @@
---
title: 'Output Compression'
description: 'Reduce token consumption by compressing MCP tool outputs'
---

# Output Compression

MCPHub provides an AI-powered compression mechanism to reduce token consumption from MCP tool outputs. This feature is particularly useful when dealing with large outputs that can significantly impact system efficiency and scalability.

## Overview

The compression feature uses a lightweight AI model (by default, `gpt-4o-mini`) to intelligently compress MCP tool outputs while preserving all essential information. This can help:

- **Reduce token overhead** by compressing verbose tool information
- **Lower operational costs** associated with token consumption
- **Improve performance** for downstream processing
- **Improve resource utilization** in resource-constrained environments

## Configuration

Add the compression configuration to your `systemConfig` section in `mcp_settings.json`:

```json
{
  "systemConfig": {
    "compression": {
      "enabled": true,
      "model": "gpt-4o-mini",
      "maxInputTokens": 100000,
      "targetReductionRatio": 0.5
    }
  }
}
```

### Configuration Options

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `enabled` | boolean | `false` | Enable or disable output compression |
| `model` | string | `"gpt-4o-mini"` | AI model to use for compression |
| `maxInputTokens` | number | `100000` | Maximum input tokens for compression |
| `targetReductionRatio` | number | `0.5` | Target size reduction ratio (0.0-1.0) |
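As a sketch, the table above corresponds to a settings shape like the following; the interface name and the `withDefaults` helper are illustrative assumptions, not the actual MCPHub types:

```typescript
// Hypothetical TypeScript shape for the compression settings documented above.
interface CompressionConfig {
  enabled: boolean; // default: false
  model: string; // default: "gpt-4o-mini"
  maxInputTokens: number; // default: 100000
  targetReductionRatio: number; // default: 0.5 (target size reduction, 0.0-1.0)
}

// Merge user-supplied settings over the documented defaults.
function withDefaults(partial: Partial<CompressionConfig> = {}): CompressionConfig {
  return {
    enabled: false,
    model: 'gpt-4o-mini',
    maxInputTokens: 100000,
    targetReductionRatio: 0.5,
    ...partial,
  };
}
```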
## Requirements

Output compression requires:

1. An OpenAI API key configured in the smart routing settings
2. The compression feature to be explicitly enabled

### Setting up the OpenAI API Key

Configure your OpenAI API key using environment variables or system configuration:

**Environment Variable:**
```bash
export OPENAI_API_KEY=your-api-key
```

**Or in `systemConfig`:**
```json
{
  "systemConfig": {
    "smartRouting": {
      "openaiApiKey": "your-api-key",
      "openaiApiBaseUrl": "https://api.openai.com/v1"
    }
  }
}
```

## How It Works

1. **Content Size Check**: When a tool call completes, the compression service checks whether the output is large enough to benefit from compression (the threshold is 10% of `maxInputTokens` or 1000 tokens, whichever is smaller)

2. **AI Compression**: If the content exceeds the threshold, it is sent to the configured AI model with instructions to compress while preserving essential information

3. **Size Validation**: The compressed result is compared with the original; if compression didn't reduce the size, the original content is used

4. **Error Handling**: If compression fails for any reason, the original content is returned unchanged
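The four steps above can be sketched as follows. The function names and the token estimator are assumptions for illustration; only the threshold rule (10% of `maxInputTokens` or 1000 tokens, whichever is smaller) and the fallback behavior come from the description:

```typescript
// Illustrative sketch of the compression pipeline described above; not the
// actual MCPHub source.
const MAX_INPUT_TOKENS = 100000;

// Threshold: 10% of maxInputTokens or 1000 tokens, whichever is smaller.
function compressionThreshold(maxInputTokens: number): number {
  return Math.min(maxInputTokens * 0.1, 1000);
}

async function maybeCompress(
  text: string,
  estimateTokens: (s: string) => number,
  compressWithModel: (s: string) => Promise<string>,
): Promise<string> {
  // Step 1: small outputs are passed through untouched.
  if (estimateTokens(text) < compressionThreshold(MAX_INPUT_TOKENS)) {
    return text;
  }
  try {
    // Step 2: ask the configured model to compress the content.
    const compressed = await compressWithModel(text);
    // Step 3: keep the original if the model failed to shrink it.
    return compressed.length < text.length ? compressed : text;
  } catch {
    // Step 4: any API failure falls back to the original content.
    return text;
  }
}
```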
## Fallback Mechanism

The compression feature includes graceful degradation for several scenarios:

- **Compression disabled**: Original content is returned
- **No API key**: Original content is returned with a warning
- **Small content**: Content below the threshold is not compressed
- **API errors**: Original content is returned on any API failure
- **Error responses**: Tool error responses are never compressed
- **Non-text content**: Images and other media types are preserved as-is

## Best Practices

1. **Start with defaults**: The default configuration provides a good balance between compression and quality

2. **Monitor results**: Review compressed outputs to ensure important information isn't lost

3. **Adjust the ratio**: If you have consistently large outputs, consider lowering `targetReductionRatio` for more aggressive compression

4. **Use efficient models**: The default `gpt-4o-mini` provides a good balance of cost and quality; switch to `gpt-4o` if you need higher-quality compression

## Limitations

- Compression adds latency due to the AI API call
- API costs apply for each compression operation
- Very short outputs won't be compressed (below the threshold)
- Binary/non-text content is not compressed
177 docs/stream-parameter.md Normal file
@@ -0,0 +1,177 @@
# Stream Parameter Support

MCPHub now supports controlling the response format of MCP requests through a `stream` parameter. This allows you to choose between Server-Sent Events (SSE) streaming responses and direct JSON responses.

## Overview

By default, MCP requests use SSE streaming for real-time communication. However, some use cases benefit from receiving complete JSON responses instead of streams. The `stream` parameter provides this flexibility.

## Usage

### Query Parameter

You can control streaming behavior by adding a `stream` query parameter to your MCP POST requests:

```bash
# Disable streaming (receive a JSON response)
POST /mcp?stream=false

# Enable streaming (SSE response) - default behavior
POST /mcp?stream=true
```

### Request Body Parameter

Alternatively, you can include the `stream` parameter in your request body:

```json
{
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "clientInfo": {
      "name": "MyClient",
      "version": "1.0.0"
    }
  },
  "stream": false,
  "jsonrpc": "2.0",
  "id": 1
}
```

**Note:** The request body parameter takes priority over the query parameter if both are specified.
## Examples

### Example 1: Non-Streaming Request

```bash
curl -X POST "http://localhost:3000/mcp?stream=false" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{
    "method": "initialize",
    "params": {
      "protocolVersion": "2025-03-26",
      "capabilities": {},
      "clientInfo": {
        "name": "TestClient",
        "version": "1.0.0"
      }
    },
    "jsonrpc": "2.0",
    "id": 1
  }'
```

Response (JSON):
```json
{
  "jsonrpc": "2.0",
  "result": {
    "protocolVersion": "2025-03-26",
    "capabilities": {
      "tools": {},
      "prompts": {}
    },
    "serverInfo": {
      "name": "MCPHub",
      "version": "1.0.0"
    }
  },
  "id": 1
}
```

### Example 2: Streaming Request (Default)

```bash
curl -X POST "http://localhost:3000/mcp" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{
    "method": "initialize",
    "params": {
      "protocolVersion": "2025-03-26",
      "capabilities": {},
      "clientInfo": {
        "name": "TestClient",
        "version": "1.0.0"
      }
    },
    "jsonrpc": "2.0",
    "id": 1
  }'
```

Response (SSE Stream):
```
HTTP/1.1 200 OK
Content-Type: text/event-stream
mcp-session-id: 550e8400-e29b-41d4-a716-446655440000

data: {"jsonrpc":"2.0","result":{...},"id":1}
```
## Use Cases

### When to Use `stream: false`

- **Simple Request-Response**: When you only need a single response without ongoing communication
- **Debugging**: Easier to inspect complete JSON responses in tools like Postman or curl
- **Testing**: Simpler to test and validate responses in automated tests
- **Stateless Operations**: When you don't need to maintain session state between requests
- **API Integration**: When integrating with systems that expect standard JSON responses

### When to Use `stream: true` (Default)

- **Real-time Communication**: When you need continuous updates or notifications
- **Long-running Operations**: For operations that may take time and send progress updates
- **Event-driven**: When your application architecture is event-based
- **MCP Protocol Compliance**: For full MCP protocol compatibility with streaming support

## Technical Details

### Implementation

The `stream` parameter controls the `enableJsonResponse` option of the underlying `StreamableHTTPServerTransport`:

- `stream: true` → `enableJsonResponse: false` → SSE streaming response
- `stream: false` → `enableJsonResponse: true` → Direct JSON response
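A minimal sketch of that inversion when building the transport options; the commented construction is an assumption about the surrounding session code, with only the `enableJsonResponse` mapping taken from the bullets above:

```typescript
// The stream flag simply inverts into the transport's enableJsonResponse option.
function transportOptions(stream: boolean): { enableJsonResponse: boolean } {
  return { enableJsonResponse: !stream };
}

// e.g. when creating a session (SDK import and session wiring assumed):
// const transport = new StreamableHTTPServerTransport({
//   sessionIdGenerator: () => randomUUID(),
//   ...transportOptions(stream),
// });
```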
### Backward Compatibility

The default behavior remains SSE streaming (`stream: true`) to maintain backward compatibility with existing clients. If the `stream` parameter is not specified, MCPHub will use streaming by default.

### Session Management

The stream parameter affects how sessions are created:

- **Streaming sessions**: Use SSE transport with session management
- **Non-streaming sessions**: Use direct JSON responses with session management

Both modes support session IDs and can be used with the MCP session management features.

## Group and Server Routes

The stream parameter works with all MCP route variants:

- Global route: `/mcp?stream=false`
- Group route: `/mcp/{group}?stream=false`
- Server route: `/mcp/{server}?stream=false`
- Smart routing: `/mcp/$smart?stream=false`

## Limitations

1. The `stream` parameter only affects POST requests to the `/mcp` endpoint
2. SSE GET requests for retrieving streams are not affected by this parameter
3. Session rebuild operations inherit the stream setting from the original request

## See Also

- [MCP Protocol Specification](https://spec.modelcontextprotocol.io/)
- [API Reference](https://docs.mcphubx.com/api-reference)
- [Configuration Guide](https://docs.mcphubx.com/configuration/mcp-settings)
@@ -18,17 +18,7 @@ const EditServerForm = ({ server, onEdit, onCancel }: EditServerFormProps) => {
     try {
       setError(null);
       const encodedServerName = encodeURIComponent(server.name);
-
-      // Check if name is being changed
-      const isRenaming = payload.name && payload.name !== server.name;
-
-      // Build the request body
-      const requestBody = {
-        config: payload.config,
-        ...(isRenaming ? { newName: payload.name } : {}),
-      };
-
-      const result = await apiPut(`/servers/${encodedServerName}`, requestBody);
+      const result = await apiPut(`/servers/${encodedServerName}`, payload);
 
       if (!result.success) {
         // Use specific error message from the response if available
@@ -429,6 +429,7 @@ const ServerForm = ({
           className="shadow appearance-none border rounded w-full py-2 px-3 text-gray-700 leading-tight focus:outline-none focus:shadow-outline form-input"
           placeholder="e.g.: time-mcp"
           required
+          disabled={isEdit}
         />
       </div>
 
@@ -25,7 +25,7 @@ interface BearerKeyRowProps {
     name: string;
     token: string;
     enabled: boolean;
-    accessType: 'all' | 'groups' | 'servers' | 'custom';
+    accessType: 'all' | 'groups' | 'servers';
     allowedGroups: string;
     allowedServers: string;
   },
@@ -47,7 +47,7 @@ const BearerKeyRow: React.FC<BearerKeyRowProps> = ({
   const [name, setName] = useState(keyData.name);
   const [token, setToken] = useState(keyData.token);
   const [enabled, setEnabled] = useState<boolean>(keyData.enabled);
-  const [accessType, setAccessType] = useState<'all' | 'groups' | 'servers' | 'custom'>(
+  const [accessType, setAccessType] = useState<'all' | 'groups' | 'servers'>(
     keyData.accessType || 'all',
   );
   const [selectedGroups, setSelectedGroups] = useState<string[]>(keyData.allowedGroups || []);
@@ -105,13 +105,6 @@ const BearerKeyRow: React.FC<BearerKeyRowProps> = ({
       );
       return;
     }
-    if (accessType === 'custom' && selectedGroups.length === 0 && selectedServers.length === 0) {
-      showToast(
-        t('settings.selectAtLeastOneGroupOrServer') || 'Please select at least one group or server',
-        'error',
-      );
-      return;
-    }
 
     setSaving(true);
     try {
@@ -142,31 +135,6 @@ const BearerKeyRow: React.FC<BearerKeyRowProps> = ({
   };
 
   const isGroupsMode = accessType === 'groups';
-  const isCustomMode = accessType === 'custom';
-
-  // Helper function to format access type display text
-  const formatAccessTypeDisplay = (key: BearerKey): string => {
-    if (key.accessType === 'all') {
-      return t('settings.bearerKeyAccessAll') || 'All Resources';
-    }
-    if (key.accessType === 'groups') {
-      return `${t('settings.bearerKeyAccessGroups') || 'Groups'}: ${key.allowedGroups}`;
-    }
-    if (key.accessType === 'servers') {
-      return `${t('settings.bearerKeyAccessServers') || 'Servers'}: ${key.allowedServers}`;
-    }
-    if (key.accessType === 'custom') {
-      const parts: string[] = [];
-      if (key.allowedGroups && key.allowedGroups.length > 0) {
-        parts.push(`${t('settings.bearerKeyAccessGroups') || 'Groups'}: ${key.allowedGroups}`);
-      }
-      if (key.allowedServers && key.allowedServers.length > 0) {
-        parts.push(`${t('settings.bearerKeyAccessServers') || 'Servers'}: ${key.allowedServers}`);
-      }
-      return `${t('settings.bearerKeyAccessCustom') || 'Custom'}: ${parts.join('; ')}`;
-    }
-    return '';
-  };
 
   if (isEditing) {
     return (
@@ -226,9 +194,7 @@ const BearerKeyRow: React.FC<BearerKeyRowProps> = ({
                 <select
                   className="block w-full py-2 px-3 border border-gray-300 bg-white rounded-md shadow-sm focus:outline-none focus:ring-blue-500 focus:border-blue-500 sm:text-sm form-select transition-shadow duration-200"
                   value={accessType}
-                  onChange={(e) =>
-                    setAccessType(e.target.value as 'all' | 'groups' | 'servers' | 'custom')
-                  }
+                  onChange={(e) => setAccessType(e.target.value as 'all' | 'groups' | 'servers')}
                   disabled={loading}
                 >
                   <option value="all">{t('settings.bearerKeyAccessAll') || 'All Resources'}</option>
@@ -238,65 +204,29 @@ const BearerKeyRow: React.FC<BearerKeyRowProps> = ({
                   <option value="servers">
                     {t('settings.bearerKeyAccessServers') || 'Specific Servers'}
                   </option>
-                  <option value="custom">
-                    {t('settings.bearerKeyAccessCustom') || 'Custom (Groups & Servers)'}
-                  </option>
                 </select>
               </div>
 
-              {/* Show single selector for groups or servers mode */}
-              {!isCustomMode && (
-                <div className="flex-1 min-w-[200px]">
-                  <label
-                    className={`block text-sm font-medium mb-1 ${accessType === 'all' ? 'text-gray-400' : 'text-gray-700'}`}
-                  >
-                    {isGroupsMode
-                      ? t('settings.bearerKeyAllowedGroups') || 'Allowed groups'
-                      : t('settings.bearerKeyAllowedServers') || 'Allowed servers'}
-                  </label>
-                  <MultiSelect
-                    options={isGroupsMode ? availableGroups : availableServers}
-                    selected={isGroupsMode ? selectedGroups : selectedServers}
-                    onChange={isGroupsMode ? setSelectedGroups : setSelectedServers}
-                    placeholder={
-                      isGroupsMode
-                        ? t('settings.selectGroups') || 'Select groups...'
-                        : t('settings.selectServers') || 'Select servers...'
-                    }
-                    disabled={loading || accessType === 'all'}
-                  />
-                </div>
-              )}
-
-              {/* Show both selectors for custom mode */}
-              {isCustomMode && (
-                <>
-                  <div className="flex-1 min-w-[200px]">
-                    <label className="block text-sm font-medium text-gray-700 mb-1">
-                      {t('settings.bearerKeyAllowedGroups') || 'Allowed groups'}
-                    </label>
-                    <MultiSelect
-                      options={availableGroups}
-                      selected={selectedGroups}
-                      onChange={setSelectedGroups}
-                      placeholder={t('settings.selectGroups') || 'Select groups...'}
-                      disabled={loading}
-                    />
-                  </div>
-                  <div className="flex-1 min-w-[200px]">
-                    <label className="block text-sm font-medium text-gray-700 mb-1">
-                      {t('settings.bearerKeyAllowedServers') || 'Allowed servers'}
-                    </label>
-                    <MultiSelect
-                      options={availableServers}
-                      selected={selectedServers}
-                      onChange={setSelectedServers}
-                      placeholder={t('settings.selectServers') || 'Select servers...'}
-                      disabled={loading}
-                    />
-                  </div>
-                </>
-              )}
+              <div className="flex-1 min-w-[200px]">
+                <label
+                  className={`block text-sm font-medium mb-1 ${accessType === 'all' ? 'text-gray-400' : 'text-gray-700'}`}
+                >
+                  {isGroupsMode
+                    ? t('settings.bearerKeyAllowedGroups') || 'Allowed groups'
+                    : t('settings.bearerKeyAllowedServers') || 'Allowed servers'}
+                </label>
+                <MultiSelect
+                  options={isGroupsMode ? availableGroups : availableServers}
+                  selected={isGroupsMode ? selectedGroups : selectedServers}
+                  onChange={isGroupsMode ? setSelectedGroups : setSelectedServers}
+                  placeholder={
+                    isGroupsMode
+                      ? t('settings.selectGroups') || 'Select groups...'
+                      : t('settings.selectServers') || 'Select servers...'
+                  }
+                  disabled={loading || accessType === 'all'}
+                />
+              </div>
 
               <div className="flex justify-end gap-2">
                 <button
@@ -351,7 +281,11 @@ const BearerKeyRow: React.FC<BearerKeyRowProps> = ({
         </span>
       </td>
       <td className="px-6 py-4 whitespace-nowrap text-sm text-gray-500">
-        {formatAccessTypeDisplay(keyData)}
+        {keyData.accessType === 'all'
+          ? t('settings.bearerKeyAccessAll') || 'All Resources'
+          : keyData.accessType === 'groups'
+            ? `${t('settings.bearerKeyAccessGroups') || 'Groups'}: ${keyData.allowedGroups}`
+            : `${t('settings.bearerKeyAccessServers') || 'Servers'}: ${keyData.allowedServers}`}
       </td>
       <td className="px-6 py-4 whitespace-nowrap text-right text-sm font-medium">
         <button
@@ -803,7 +737,7 @@ const SettingsPage: React.FC = () => {
     name: string;
     token: string;
     enabled: boolean;
-    accessType: 'all' | 'groups' | 'servers' | 'custom';
+    accessType: 'all' | 'groups' | 'servers';
    allowedGroups: string;
    allowedServers: string;
  }>({
@@ -831,10 +765,10 @@ const SettingsPage: React.FC = () => {
 
   // Reset selected arrays when accessType changes
   useEffect(() => {
-    if (newBearerKey.accessType !== 'groups' && newBearerKey.accessType !== 'custom') {
+    if (newBearerKey.accessType !== 'groups') {
       setNewSelectedGroups([]);
     }
-    if (newBearerKey.accessType !== 'servers' && newBearerKey.accessType !== 'custom') {
+    if (newBearerKey.accessType !== 'servers') {
       setNewSelectedServers([]);
     }
   }, [newBearerKey.accessType]);
@@ -932,17 +866,6 @@ const SettingsPage: React.FC = () => {
       );
       return;
     }
-    if (
-      newBearerKey.accessType === 'custom' &&
-      newSelectedGroups.length === 0 &&
-      newSelectedServers.length === 0
-    ) {
-      showToast(
-        t('settings.selectAtLeastOneGroupOrServer') || 'Please select at least one group or server',
-        'error',
-      );
-      return;
-    }
 
     await createBearerKey({
       name: newBearerKey.name,
@@ -950,13 +873,11 @@ const SettingsPage: React.FC = () => {
       enabled: newBearerKey.enabled,
       accessType: newBearerKey.accessType,
       allowedGroups:
-        (newBearerKey.accessType === 'groups' || newBearerKey.accessType === 'custom') &&
-        newSelectedGroups.length > 0
+        newBearerKey.accessType === 'groups' && newSelectedGroups.length > 0
           ? newSelectedGroups
           : undefined,
       allowedServers:
-        (newBearerKey.accessType === 'servers' || newBearerKey.accessType === 'custom') &&
-        newSelectedServers.length > 0
+        newBearerKey.accessType === 'servers' && newSelectedServers.length > 0
           ? newSelectedServers
           : undefined,
     } as any);
@@ -980,7 +901,7 @@ const SettingsPage: React.FC = () => {
       name: string;
       token: string;
       enabled: boolean;
-      accessType: 'all' | 'groups' | 'servers' | 'custom';
+      accessType: 'all' | 'groups' | 'servers';
       allowedGroups: string;
       allowedServers: string;
     },
@@ -1207,7 +1128,7 @@ const SettingsPage: React.FC = () => {
                   onChange={(e) =>
                     setNewBearerKey((prev) => ({
                       ...prev,
-                      accessType: e.target.value as 'all' | 'groups' | 'servers' | 'custom',
+                      accessType: e.target.value as 'all' | 'groups' | 'servers',
                     }))
                   }
                   disabled={loading}
@@ -1221,75 +1142,41 @@ const SettingsPage: React.FC = () => {
                   <option value="servers">
                     {t('settings.bearerKeyAccessServers') || 'Specific Servers'}
                   </option>
-                  <option value="custom">
-                    {t('settings.bearerKeyAccessCustom') || 'Custom (Groups & Servers)'}
-                  </option>
                 </select>
               </div>
 
-              {newBearerKey.accessType !== 'custom' && (
-                <div className="flex-1 min-w-[200px]">
-                  <label
-                    className={`block text-sm font-medium mb-1 ${newBearerKey.accessType === 'all' ? 'text-gray-400' : 'text-gray-700'}`}
-                  >
-                    {newBearerKey.accessType === 'groups'
-                      ? t('settings.bearerKeyAllowedGroups') || 'Allowed groups'
-                      : t('settings.bearerKeyAllowedServers') || 'Allowed servers'}
-                  </label>
-                  <MultiSelect
-                    options={
-                      newBearerKey.accessType === 'groups'
-                        ? availableGroups
-                        : availableServers
-                    }
-                    selected={
-                      newBearerKey.accessType === 'groups'
-                        ? newSelectedGroups
-                        : newSelectedServers
-                    }
-                    onChange={
-                      newBearerKey.accessType === 'groups'
-                        ? setNewSelectedGroups
-                        : setNewSelectedServers
-                    }
-                    placeholder={
-                      newBearerKey.accessType === 'groups'
-                        ? t('settings.selectGroups') || 'Select groups...'
-                        : t('settings.selectServers') || 'Select servers...'
-                    }
-                    disabled={loading || newBearerKey.accessType === 'all'}
-                  />
-                </div>
-              )}
-
-              {newBearerKey.accessType === 'custom' && (
-                <>
-                  <div className="flex-1 min-w-[200px]">
-                    <label className="block text-sm font-medium text-gray-700 mb-1">
-                      {t('settings.bearerKeyAllowedGroups') || 'Allowed groups'}
-                    </label>
-                    <MultiSelect
-                      options={availableGroups}
-                      selected={newSelectedGroups}
-                      onChange={setNewSelectedGroups}
-                      placeholder={t('settings.selectGroups') || 'Select groups...'}
-                      disabled={loading}
-                    />
-                  </div>
-                  <div className="flex-1 min-w-[200px]">
-                    <label className="block text-sm font-medium text-gray-700 mb-1">
-                      {t('settings.bearerKeyAllowedServers') || 'Allowed servers'}
-                    </label>
-                    <MultiSelect
-                      options={availableServers}
-                      selected={newSelectedServers}
-                      onChange={setNewSelectedServers}
-                      placeholder={t('settings.selectServers') || 'Select servers...'}
-                      disabled={loading}
-                    />
-                  </div>
-                </>
-              )}
+              <div className="flex-1 min-w-[200px]">
+                <label
+                  className={`block text-sm font-medium mb-1 ${newBearerKey.accessType === 'all' ? 'text-gray-400' : 'text-gray-700'}`}
+                >
+                  {newBearerKey.accessType === 'groups'
+                    ? t('settings.bearerKeyAllowedGroups') || 'Allowed groups'
+                    : t('settings.bearerKeyAllowedServers') || 'Allowed servers'}
+                </label>
+                <MultiSelect
+                  options={
+                    newBearerKey.accessType === 'groups'
|
||||
? availableGroups
|
||||
: availableServers
|
||||
}
|
||||
selected={
|
||||
newBearerKey.accessType === 'groups'
|
||||
? newSelectedGroups
|
||||
: newSelectedServers
|
||||
}
|
||||
onChange={
|
||||
newBearerKey.accessType === 'groups'
|
||||
? setNewSelectedGroups
|
||||
: setNewSelectedServers
|
||||
}
|
||||
placeholder={
|
||||
newBearerKey.accessType === 'groups'
|
||||
? t('settings.selectGroups') || 'Select groups...'
|
||||
: t('settings.selectServers') || 'Select servers...'
|
||||
}
|
||||
disabled={loading || newBearerKey.accessType === 'all'}
|
||||
/>
|
||||
</div>
|
||||
|
||||
<div className="flex justify-end gap-2">
|
||||
<button
|
||||
|
||||
@@ -310,7 +310,7 @@ export interface ApiResponse<T = any> {
}

// Bearer authentication key configuration (frontend view model)
export type BearerKeyAccessType = 'all' | 'groups' | 'servers' | 'custom';
export type BearerKeyAccessType = 'all' | 'groups' | 'servers';

export interface BearerKey {
  id: string;
@@ -568,7 +568,6 @@
  "bearerKeyAccessAll": "All",
  "bearerKeyAccessGroups": "Groups",
  "bearerKeyAccessServers": "Servers",
  "bearerKeyAccessCustom": "Custom",
  "bearerKeyAllowedGroups": "Allowed groups",
  "bearerKeyAllowedServers": "Allowed servers",
  "addBearerKey": "Add key",

@@ -569,7 +569,6 @@
  "bearerKeyAccessAll": "Toutes",
  "bearerKeyAccessGroups": "Groupes",
  "bearerKeyAccessServers": "Serveurs",
  "bearerKeyAccessCustom": "Personnalisée",
  "bearerKeyAllowedGroups": "Groupes autorisés",
  "bearerKeyAllowedServers": "Serveurs autorisés",
  "addBearerKey": "Ajouter une clé",

@@ -569,7 +569,6 @@
  "bearerKeyAccessAll": "Tümü",
  "bearerKeyAccessGroups": "Gruplar",
  "bearerKeyAccessServers": "Sunucular",
  "bearerKeyAccessCustom": "Özel",
  "bearerKeyAllowedGroups": "İzin verilen gruplar",
  "bearerKeyAllowedServers": "İzin verilen sunucular",
  "addBearerKey": "Anahtar ekle",

@@ -570,7 +570,6 @@
  "bearerKeyAccessAll": "全部",
  "bearerKeyAccessGroups": "指定分组",
  "bearerKeyAccessServers": "指定服务器",
  "bearerKeyAccessCustom": "自定义",
  "bearerKeyAllowedGroups": "允许访问的分组",
  "bearerKeyAllowedServers": "允许访问的服务器",
  "addBearerKey": "新增密钥",

@@ -63,6 +63,5 @@
          "requiresAuthentication": false
        }
      }
    },
    "bearerKeys": []
  }
}
@@ -57,7 +57,7 @@ export const createBearerKey = async (req: Request, res: Response): Promise<void
    return;
  }

  if (!accessType || !['all', 'groups', 'servers', 'custom'].includes(accessType)) {
  if (!accessType || !['all', 'groups', 'servers'].includes(accessType)) {
    res.status(400).json({ success: false, message: 'Invalid accessType' });
    return;
  }

@@ -104,7 +104,7 @@ export const updateBearerKey = async (req: Request, res: Response): Promise<void
  if (token !== undefined) updates.token = token;
  if (enabled !== undefined) updates.enabled = enabled;
  if (accessType !== undefined) {
    if (!['all', 'groups', 'servers', 'custom'].includes(accessType)) {
    if (!['all', 'groups', 'servers'].includes(accessType)) {
      res.status(400).json({ success: false, message: 'Invalid accessType' });
      return;
    }
@@ -423,7 +423,7 @@ export const deleteServer = async (req: Request, res: Response): Promise<void> =
export const updateServer = async (req: Request, res: Response): Promise<void> => {
  try {
    const { name } = req.params;
    const { config, newName } = req.body;
    const { config } = req.body;
    if (!name) {
      res.status(400).json({
        success: false,

@@ -510,52 +510,12 @@ export const updateServer = async (req: Request, res: Response): Promise<void> =
      config.owner = currentUser?.username || 'admin';
    }

    // Check if server name is being changed
    const isRenaming = newName && newName !== name;

    // If renaming, validate the new name and update references
    if (isRenaming) {
      const serverDao = getServerDao();

      // Check if new name already exists
      if (await serverDao.exists(newName)) {
        res.status(400).json({
          success: false,
          message: `Server name '${newName}' already exists`,
        });
        return;
      }

      // Rename the server
      const renamed = await serverDao.rename(name, newName);
      if (!renamed) {
        res.status(404).json({
          success: false,
          message: 'Server not found',
        });
        return;
      }

      // Update references in groups
      const groupDao = getGroupDao();
      await groupDao.updateServerName(name, newName);

      // Update references in bearer keys
      const bearerKeyDao = getBearerKeyDao();
      await bearerKeyDao.updateServerName(name, newName);
    }

    // Use the final server name (new name if renaming, otherwise original name)
    const finalName = isRenaming ? newName : name;

    const result = await addOrUpdateServer(finalName, config, true); // Allow override for updates
    const result = await addOrUpdateServer(name, config, true); // Allow override for updates
    if (result.success) {
      notifyToolChanged(finalName);
      notifyToolChanged(name);
      res.json({
        success: true,
        message: isRenaming
          ? `Server renamed and updated successfully`
          : 'Server updated successfully',
        message: 'Server updated successfully',
      });
    } else {
      res.status(404).json({

@@ -564,10 +524,9 @@ export const updateServer = async (req: Request, res: Response): Promise<void> =
      });
    }
  } catch (error) {
    console.error('Failed to update server:', error);
    res.status(500).json({
      success: false,
      message: error instanceof Error ? error.message : 'Internal server error',
      message: 'Internal server error',
    });
  }
};
@@ -13,10 +13,6 @@ export interface BearerKeyDao {
  create(data: Omit<BearerKey, 'id'>): Promise<BearerKey>;
  update(id: string, data: Partial<Omit<BearerKey, 'id'>>): Promise<BearerKey | null>;
  delete(id: string): Promise<boolean>;
  /**
   * Update server name in all bearer keys (when server is renamed)
   */
  updateServerName(oldName: string, newName: string): Promise<number>;
}

/**

@@ -126,34 +122,4 @@ export class BearerKeyDaoImpl extends JsonFileBaseDao implements BearerKeyDao {
    await this.saveKeys(next);
    return true;
  }

  async updateServerName(oldName: string, newName: string): Promise<number> {
    const keys = await this.loadKeysWithMigration();
    let updatedCount = 0;

    for (const key of keys) {
      let updated = false;

      if (key.allowedServers && key.allowedServers.length > 0) {
        const newServers = key.allowedServers.map((server) => {
          if (server === oldName) {
            updated = true;
            return newName;
          }
          return server;
        });

        if (updated) {
          key.allowedServers = newServers;
          updatedCount++;
        }
      }
    }

    if (updatedCount > 0) {
      await this.saveKeys(keys);
    }

    return updatedCount;
  }
}

@@ -74,30 +74,4 @@ export class BearerKeyDaoDbImpl implements BearerKeyDao {
  async delete(id: string): Promise<boolean> {
    return await this.repository.delete(id);
  }

  async updateServerName(oldName: string, newName: string): Promise<number> {
    const allKeys = await this.repository.findAll();
    let updatedCount = 0;

    for (const key of allKeys) {
      let updated = false;

      if (key.allowedServers && key.allowedServers.length > 0) {
        const newServers = key.allowedServers.map((server) => {
          if (server === oldName) {
            updated = true;
            return newName;
          }
          return server;
        });

        if (updated) {
          await this.repository.update(key.id, { allowedServers: newServers });
          updatedCount++;
        }
      }
    }

    return updatedCount;
  }
}
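The removed `updateServerName` implementations above (file-based and DB-based alike) share one map-and-flag pattern: rewrite each record's server list, note whether anything changed, and count the touched records. A standalone sketch of that pattern, using an illustrative `KeyLike` shape rather than the project's actual entities:

```typescript
// Illustrative record shape; the real bearer-key entity has more fields.
interface KeyLike {
  id: string;
  allowedServers?: string[];
}

// Replace oldName with newName in every record's allowedServers list,
// returning how many records were actually modified.
function renameServerRefs(keys: KeyLike[], oldName: string, newName: string): number {
  let updatedCount = 0;
  for (const key of keys) {
    if (key.allowedServers && key.allowedServers.length > 0) {
      let updated = false;
      key.allowedServers = key.allowedServers.map((server) => {
        if (server === oldName) {
          updated = true;
          return newName;
        }
        return server;
      });
      if (updated) {
        updatedCount++;
      }
    }
  }
  return updatedCount;
}
```

The returned count is what lets callers decide whether a save is needed at all, mirroring the `if (updatedCount > 0) await this.saveKeys(keys)` guard in the deleted code.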
@@ -36,11 +36,6 @@ export interface GroupDao extends BaseDao<IGroup, string> {
   * Find group by name
   */
  findByName(name: string): Promise<IGroup | null>;

  /**
   * Update server name in all groups (when server is renamed)
   */
  updateServerName(oldName: string, newName: string): Promise<number>;
}

/**

@@ -223,39 +218,4 @@ export class GroupDaoImpl extends JsonFileBaseDao implements GroupDao {
    const groups = await this.getAll();
    return groups.find((group) => group.name === name) || null;
  }

  async updateServerName(oldName: string, newName: string): Promise<number> {
    const groups = await this.getAll();
    let updatedCount = 0;

    for (const group of groups) {
      let updated = false;
      const newServers = group.servers.map((server) => {
        if (typeof server === 'string') {
          if (server === oldName) {
            updated = true;
            return newName;
          }
          return server;
        } else {
          if (server.name === oldName) {
            updated = true;
            return { ...server, name: newName };
          }
          return server;
        }
      }) as IGroup['servers'];

      if (updated) {
        group.servers = newServers;
        updatedCount++;
      }
    }

    if (updatedCount > 0) {
      await this.saveAll(groups);
    }

    return updatedCount;
  }
}

@@ -151,35 +151,4 @@ export class GroupDaoDbImpl implements GroupDao {
      owner: group.owner,
    };
  }

  async updateServerName(oldName: string, newName: string): Promise<number> {
    const allGroups = await this.repository.findAll();
    let updatedCount = 0;

    for (const group of allGroups) {
      let updated = false;
      const newServers = group.servers.map((server) => {
        if (typeof server === 'string') {
          if (server === oldName) {
            updated = true;
            return newName;
          }
          return server;
        } else {
          if (server.name === oldName) {
            updated = true;
            return { ...server, name: newName };
          }
          return server;
        }
      });

      if (updated) {
        await this.update(group.id, { servers: newServers as any });
        updatedCount++;
      }
    }

    return updatedCount;
  }
}
@@ -41,11 +41,6 @@ export interface ServerDao extends BaseDao<ServerConfigWithName, string> {
    name: string,
    prompts: Record<string, { enabled: boolean; description?: string }>,
  ): Promise<boolean>;

  /**
   * Rename a server (change its name/key)
   */
  rename(oldName: string, newName: string): Promise<boolean>;
}

/**

@@ -100,8 +95,7 @@ export class ServerDaoImpl extends JsonFileBaseDao implements ServerDao {
    return {
      ...existing,
      ...updates,
      // Keep the existing name unless explicitly updating via rename
      name: updates.name ?? existing.name,
      name: existing.name, // Name should not be updated
    };
  }

@@ -147,7 +141,9 @@ export class ServerDaoImpl extends JsonFileBaseDao implements ServerDao {
      return null;
    }

    const updatedServer = this.updateEntity(servers[index], updates);
    // Don't allow name changes
    const { name: _, ...allowedUpdates } = updates;
    const updatedServer = this.updateEntity(servers[index], allowedUpdates);
    servers[index] = updatedServer;

    await this.saveAll(servers);

@@ -211,22 +207,4 @@ export class ServerDaoImpl extends JsonFileBaseDao implements ServerDao {
    const result = await this.update(name, { prompts });
    return result !== null;
  }

  async rename(oldName: string, newName: string): Promise<boolean> {
    const servers = await this.getAll();
    const index = servers.findIndex((server) => server.name === oldName);

    if (index === -1) {
      return false;
    }

    // Check if newName already exists
    if (servers.find((server) => server.name === newName)) {
      throw new Error(`Server ${newName} already exists`);
    }

    servers[index] = { ...servers[index], name: newName };
    await this.saveAll(servers);
    return true;
  }
}
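The replacement update path above strips `name` from the incoming patch with rest destructuring before merging, a compact way to blacklist a protected field. A minimal standalone sketch of that move (the `ServerRecord` type is illustrative, not the project's actual entity):

```typescript
// Illustrative entity; the real server config carries many more fields.
interface ServerRecord {
  name: string;
  enabled: boolean;
}

// Merge an update patch over an existing record while refusing to let
// the patch change the record's name: peel `name` off via rest
// destructuring and spread only the remainder.
function applyUpdate(existing: ServerRecord, updates: Partial<ServerRecord>): ServerRecord {
  const { name: _ignored, ...allowedUpdates } = updates;
  return { ...existing, ...allowedUpdates };
}
```

Because the later-spread object wins in an object spread, dropping `name` from `allowedUpdates` is enough to guarantee `existing.name` survives the merge.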
@@ -115,15 +115,6 @@ export class ServerDaoDbImpl implements ServerDao {
    return result !== null;
  }

  async rename(oldName: string, newName: string): Promise<boolean> {
    // Check if newName already exists
    if (await this.repository.exists(newName)) {
      throw new Error(`Server ${newName} already exists`);
    }

    return await this.repository.rename(oldName, newName);
  }

  private mapToServerConfig(server: {
    name: string;
    type?: string;

@@ -25,7 +25,7 @@ export class BearerKey {
  enabled: boolean;

  @Column({ type: 'varchar', length: 20, default: 'all' })
  accessType: 'all' | 'groups' | 'servers' | 'custom';
  accessType: 'all' | 'groups' | 'servers';

  @Column({ type: 'simple-json', nullable: true })
  allowedGroups?: string[];

@@ -33,9 +33,6 @@ export class SystemConfig {
  @Column({ type: 'boolean', nullable: true })
  enableSessionRebuild?: boolean;

  @Column({ type: 'simple-json', nullable: true })
  compression?: Record<string, any>;

  @CreateDateColumn({ name: 'created_at', type: 'timestamp' })
  createdAt: Date;

@@ -89,19 +89,6 @@ export class ServerRepository {
  async setEnabled(name: string, enabled: boolean): Promise<Server | null> {
    return await this.update(name, { enabled });
  }

  /**
   * Rename a server
   */
  async rename(oldName: string, newName: string): Promise<boolean> {
    const server = await this.findByName(oldName);
    if (!server) {
      return false;
    }
    server.name = newName;
    await this.repository.save(server);
    return true;
  }
}

export default ServerRepository;

@@ -32,7 +32,6 @@ export class SystemConfigRepository {
      oauth: {},
      oauthServer: {},
      enableSessionRebuild: false,
      compression: {},
    });
    config = await this.repository.save(config);
  }
@@ -1,266 +0,0 @@
import OpenAI from 'openai';
import { getSmartRoutingConfig, SmartRoutingConfig } from '../utils/smartRouting.js';
import { getSystemConfigDao } from '../dao/index.js';

/**
 * Compression configuration interface
 */
export interface CompressionConfig {
  enabled: boolean;
  model?: string;
  maxInputTokens?: number;
  targetReductionRatio?: number;
}

/**
 * Default compression configuration
 */
const DEFAULT_COMPRESSION_CONFIG: CompressionConfig = {
  enabled: false,
  model: 'gpt-4o-mini',
  maxInputTokens: 100000,
  targetReductionRatio: 0.5,
};

/**
 * Get compression configuration from system settings
 */
export async function getCompressionConfig(): Promise<CompressionConfig> {
  try {
    const systemConfigDao = getSystemConfigDao();
    const systemConfig = await systemConfigDao.get();
    const compressionSettings = systemConfig?.compression || {};

    return {
      enabled: compressionSettings.enabled ?? DEFAULT_COMPRESSION_CONFIG.enabled,
      model: compressionSettings.model ?? DEFAULT_COMPRESSION_CONFIG.model,
      maxInputTokens: compressionSettings.maxInputTokens ?? DEFAULT_COMPRESSION_CONFIG.maxInputTokens,
      targetReductionRatio:
        compressionSettings.targetReductionRatio ?? DEFAULT_COMPRESSION_CONFIG.targetReductionRatio,
    };
  } catch (error) {
    console.warn('Failed to get compression config, using defaults:', error);
    return DEFAULT_COMPRESSION_CONFIG;
  }
}

/**
 * Check if compression is available and enabled
 */
export async function isCompressionEnabled(): Promise<boolean> {
  const config = await getCompressionConfig();
  if (!config.enabled) {
    return false;
  }

  // Check if we have OpenAI API key configured (via smart routing config)
  const smartRoutingConfig = await getSmartRoutingConfig();
  return !!smartRoutingConfig.openaiApiKey;
}

/**
 * Get OpenAI client for compression
 */
async function getOpenAIClient(smartRoutingConfig: SmartRoutingConfig): Promise<OpenAI | null> {
  if (!smartRoutingConfig.openaiApiKey) {
    return null;
  }

  return new OpenAI({
    apiKey: smartRoutingConfig.openaiApiKey,
    baseURL: smartRoutingConfig.openaiApiBaseUrl || 'https://api.openai.com/v1',
  });
}

/**
 * Estimate token count for a string (rough approximation)
 * Uses ~4 characters per token as a rough estimate
 */
export function estimateTokenCount(text: string): number {
  return Math.ceil(text.length / 4);
}

/**
 * Check if content should be compressed based on token count
 */
export function shouldCompress(content: string, maxInputTokens: number): boolean {
  const estimatedTokens = estimateTokenCount(content);
  // Only compress if content is larger than a reasonable threshold
  const compressionThreshold = Math.min(maxInputTokens * 0.1, 1000);
  return estimatedTokens > compressionThreshold;
}

/**
 * Compress MCP tool output using AI
 *
 * @param content The MCP tool output content to compress
 * @param context Optional context about the tool that generated this output
 * @returns Compressed content or original content if compression fails/is disabled
 */
export async function compressOutput(
  content: string,
  context?: {
    toolName?: string;
    serverName?: string;
  },
): Promise<{ compressed: string; originalLength: number; compressedLength: number; wasCompressed: boolean }> {
  const originalLength = content.length;

  // Check if compression is enabled
  const compressionConfig = await getCompressionConfig();
  if (!compressionConfig.enabled) {
    return {
      compressed: content,
      originalLength,
      compressedLength: originalLength,
      wasCompressed: false,
    };
  }

  // Check if content should be compressed
  if (!shouldCompress(content, compressionConfig.maxInputTokens || 100000)) {
    return {
      compressed: content,
      originalLength,
      compressedLength: originalLength,
      wasCompressed: false,
    };
  }

  try {
    const smartRoutingConfig = await getSmartRoutingConfig();
    const openai = await getOpenAIClient(smartRoutingConfig);

    if (!openai) {
      console.warn('Compression enabled but OpenAI API key not configured');
      return {
        compressed: content,
        originalLength,
        compressedLength: originalLength,
        wasCompressed: false,
      };
    }

    const targetRatio = compressionConfig.targetReductionRatio || 0.5;
    const toolContext = context?.toolName ? `from tool "${context.toolName}"` : '';
    const serverContext = context?.serverName ? `on server "${context.serverName}"` : '';

    const systemPrompt = `You are a data compression assistant. Your task is to compress MCP (Model Context Protocol) tool outputs while preserving all essential information.

Guidelines:
- Remove redundant information, formatting, and verbose descriptions
- Preserve all data values, identifiers, and critical information
- Keep error messages and status information intact
- Maintain structured data (JSON, arrays) in a compact but readable format
- Target approximately ${Math.round(targetRatio * 100)}% reduction in size
- If the content cannot be meaningfully compressed, return it as-is

The output is ${toolContext} ${serverContext}.`;

    const userPrompt = `Compress the following MCP tool output while preserving all essential information:

${content}`;

    const response = await openai.chat.completions.create({
      model: compressionConfig.model || 'gpt-4o-mini',
      messages: [
        { role: 'system', content: systemPrompt },
        { role: 'user', content: userPrompt },
      ],
      temperature: 0.1,
      max_tokens: Math.ceil(estimateTokenCount(content) * targetRatio * 1.5),
    });

    const compressedContent = response.choices[0]?.message?.content;

    if (!compressedContent) {
      console.warn('Compression returned empty result, using original content');
      return {
        compressed: content,
        originalLength,
        compressedLength: originalLength,
        wasCompressed: false,
      };
    }

    const compressedLength = compressedContent.length;

    // Only use compressed version if it's actually smaller
    if (compressedLength >= originalLength) {
      console.log('Compression did not reduce size, using original content');
      return {
        compressed: content,
        originalLength,
        compressedLength: originalLength,
        wasCompressed: false,
      };
    }

    const reductionPercent = (((originalLength - compressedLength) / originalLength) * 100).toFixed(1);
    console.log(`Compressed output: ${originalLength} -> ${compressedLength} chars (${reductionPercent}% reduction)`);

    return {
      compressed: compressedContent,
      originalLength,
      compressedLength,
      wasCompressed: true,
    };
  } catch (error) {
    console.error('Compression failed, using original content:', error);
    return {
      compressed: content,
      originalLength,
      compressedLength: originalLength,
      wasCompressed: false,
    };
  }
}

/**
 * Compress tool call result content
 * This handles the MCP tool result format with content array
 */
export async function compressToolResult(
  result: any,
  context?: {
    toolName?: string;
    serverName?: string;
  },
): Promise<any> {
  // Check if compression is enabled first
  const compressionEnabled = await isCompressionEnabled();
  if (!compressionEnabled) {
    return result;
  }

  // Handle error results - don't compress error messages
  if (result?.isError) {
    return result;
  }

  // Handle content array format
  if (!result?.content || !Array.isArray(result.content)) {
    return result;
  }

  const compressedContent = await Promise.all(
    result.content.map(async (item: any) => {
      // Only compress text content
      if (item?.type !== 'text' || !item?.text) {
        return item;
      }

      const compressionResult = await compressOutput(item.text, context);

      return {
        ...item,
        text: compressionResult.compressed,
      };
    }),
  );

  return {
    ...result,
    content: compressedContent,
  };
}
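The deleted compression service gated the extra model call on a rough size heuristic: about 4 characters per token, compressing only above min(10% of `maxInputTokens`, 1000) estimated tokens. The two pure helpers can be reproduced standalone so the threshold math is easy to check:

```typescript
// Rough token estimate (~4 characters per token), matching the removed
// compressionService's heuristic for sizing tool output.
function estimateTokenCount(text: string): number {
  return Math.ceil(text.length / 4);
}

// Compress only when the estimate exceeds min(10% of the token budget,
// 1000 tokens), so small outputs never pay for an extra model call.
function shouldCompress(content: string, maxInputTokens: number): boolean {
  const compressionThreshold = Math.min(maxInputTokens * 0.1, 1000);
  return estimateTokenCount(content) > compressionThreshold;
}
```

With the default `maxInputTokens` of 100000, the threshold is capped at 1000 tokens, i.e. roughly 4000 characters of tool output.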
@@ -27,7 +27,6 @@ import { getDataService } from './services.js';
import { getServerDao, getSystemConfigDao, ServerConfigWithName } from '../dao/index.js';
import { initializeAllOAuthClients } from './oauthService.js';
import { createOAuthProvider } from './mcpOAuthProvider.js';
import { compressToolResult } from './compressionService.js';

const servers: { [sessionId: string]: Server } = {};

@@ -1261,7 +1260,7 @@ export const handleCallToolRequest = async (request: any, extra: any) => {
      const result = await openApiClient.callTool(cleanToolName, finalArgs, passthroughHeaders);

      console.log(`OpenAPI tool invocation result: ${JSON.stringify(result)}`);
      const openApiResult = {
      return {
        content: [
          {
            type: 'text',

@@ -1269,10 +1268,6 @@ export const handleCallToolRequest = async (request: any, extra: any) => {
          },
        ],
      };
      return compressToolResult(openApiResult, {
        toolName: cleanToolName,
        serverName: targetServerInfo.name,
      });
    }

    // Call the tool on the target server (MCP servers)

@@ -1302,10 +1297,7 @@ export const handleCallToolRequest = async (request: any, extra: any) => {
      );

      console.log(`Tool invocation result: ${JSON.stringify(result)}`);
      return compressToolResult(result, {
        toolName,
        serverName: targetServerInfo.name,
      });
      return result;
    }

    // Regular tool handling

@@ -1364,7 +1356,7 @@ export const handleCallToolRequest = async (request: any, extra: any) => {
      );

      console.log(`OpenAPI tool invocation result: ${JSON.stringify(result)}`);
      const openApiResult = {
      return {
        content: [
          {
            type: 'text',

@@ -1372,10 +1364,6 @@ export const handleCallToolRequest = async (request: any, extra: any) => {
          },
        ],
      };
      return compressToolResult(openApiResult, {
        toolName: cleanToolName,
        serverName: serverInfo.name,
      });
    }

    // Handle MCP servers

@@ -1386,7 +1374,6 @@ export const handleCallToolRequest = async (request: any, extra: any) => {

    const separator = getNameSeparator();
    const prefix = `${serverInfo.name}${separator}`;
    const originalToolName = request.params.name;
    request.params.name = request.params.name.startsWith(prefix)
      ? request.params.name.substring(prefix.length)
      : request.params.name;

@@ -1396,10 +1383,7 @@ export const handleCallToolRequest = async (request: any, extra: any) => {
      serverInfo.options || {},
    );
    console.log(`Tool call result: ${JSON.stringify(result)}`);
    return compressToolResult(result, {
      toolName: originalToolName,
      serverName: serverInfo.name,
    });
    return result;
  } catch (error) {
    console.error(`Error handling CallToolRequest: ${error}`);
    return {
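The tests that follow pin down a precedence rule for the `stream` flag: a boolean `stream` in the JSON body wins over the `stream` query string, and the default is streaming (`enableJsonResponse: false`). A minimal sketch of that resolution logic (a hypothetical helper, not the project's actual implementation):

```typescript
// Resolve enableJsonResponse for an incoming MCP POST request:
// body.stream (boolean) takes priority over query.stream (string),
// and the default keeps streaming on, i.e. enableJsonResponse = false.
function resolveEnableJsonResponse(
  body: { stream?: boolean } | undefined,
  query: { stream?: string } | undefined,
): boolean {
  if (body && typeof body.stream === 'boolean') {
    return !body.stream; // stream: false => plain JSON response mode
  }
  if (query && typeof query.stream === 'string') {
    return query.stream === 'false'; // query values arrive as strings
  }
  return false; // default: streaming enabled
}
```

Note the query branch compares against the literal string `'false'`, which is why the test suite below exercises string values from the query parser separately from booleans in the parsed JSON body.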
@@ -633,4 +633,274 @@ describe('sseService', () => {
      expectBearerUnauthorized(res, 'No authorization provided');
    });
  });

  describe('stream parameter support', () => {
    beforeEach(() => {
      // Clear transports before each test
      Object.keys(transports).forEach((key) => delete transports[key]);
    });

    it('should create transport with enableJsonResponse=true when stream=false in body', async () => {
      const req = createMockRequest({
        params: { group: 'test-group' },
        body: {
          method: 'initialize',
          stream: false,
        },
      });
      const res = createMockResponse();

      await handleMcpPostRequest(req, res);

      // Check that StreamableHTTPServerTransport was called with enableJsonResponse: true
      expect(StreamableHTTPServerTransport).toHaveBeenCalledWith(
        expect.objectContaining({
          enableJsonResponse: true,
        }),
      );
    });

    it('should create transport with enableJsonResponse=false when stream=true in body', async () => {
      const req = createMockRequest({
        params: { group: 'test-group' },
        body: {
          method: 'initialize',
          stream: true,
        },
      });
      const res = createMockResponse();

      await handleMcpPostRequest(req, res);

      // Check that StreamableHTTPServerTransport was called with enableJsonResponse: false
      expect(StreamableHTTPServerTransport).toHaveBeenCalledWith(
        expect.objectContaining({
          enableJsonResponse: false,
        }),
      );
    });

    it('should create transport with enableJsonResponse=true when stream=false in query', async () => {
      const req = createMockRequest({
        params: { group: 'test-group' },
        query: { stream: 'false' },
        body: {
          method: 'initialize',
        },
      });
      const res = createMockResponse();

      await handleMcpPostRequest(req, res);

      // Check that StreamableHTTPServerTransport was called with enableJsonResponse: true
      expect(StreamableHTTPServerTransport).toHaveBeenCalledWith(
        expect.objectContaining({
          enableJsonResponse: true,
        }),
      );
    });

    it('should default to enableJsonResponse=false when stream parameter not provided', async () => {
      const req = createMockRequest({
        params: { group: 'test-group' },
        body: {
          method: 'initialize',
        },
      });
      const res = createMockResponse();

      await handleMcpPostRequest(req, res);

      // Check that StreamableHTTPServerTransport was called with enableJsonResponse: false (default)
      expect(StreamableHTTPServerTransport).toHaveBeenCalledWith(
        expect.objectContaining({
          enableJsonResponse: false,
        }),
      );
    });

    it('should prioritize body stream parameter over query parameter', async () => {
      const req = createMockRequest({
        params: { group: 'test-group' },
        query: { stream: 'true' },
        body: {
          method: 'initialize',
          stream: false, // body should take priority
        },
      });
      const res = createMockResponse();

      await handleMcpPostRequest(req, res);

      // Check that StreamableHTTPServerTransport was called with enableJsonResponse: true (from body)
      expect(StreamableHTTPServerTransport).toHaveBeenCalledWith(
        expect.objectContaining({
          enableJsonResponse: true,
        }),
      );
    });

    it('should pass enableJsonResponse to createSessionWithId when rebuilding session', async () => {
      setMockSystemConfig({
        routing: {
          enableGlobalRoute: true,
          enableGroupNameRoute: true,
          enableBearerAuth: false,
          bearerAuthKey: 'test-key',
        },
        enableSessionRebuild: true,
      });

      const req = createMockRequest({
        params: { group: 'test-group' },
        headers: { 'mcp-session-id': 'invalid-session' },
        body: {
          method: 'someMethod',
          stream: false,
        },
      });
      const res = createMockResponse();

      await handleMcpPostRequest(req, res);

      // Check that StreamableHTTPServerTransport was called with enableJsonResponse: true
      expect(StreamableHTTPServerTransport).toHaveBeenCalledWith(
        expect.objectContaining({
          enableJsonResponse: true,
        }),
      );
    });

    it('should handle string "false" in query parameter', async () => {
      const req = createMockRequest({
        params: { group: 'test-group' },
        query: { stream: 'false' },
        body: {
|
||||
method: 'initialize',
|
||||
},
|
||||
});
|
||||
const res = createMockResponse();
|
||||
|
||||
await handleMcpPostRequest(req, res);
|
||||
|
||||
expect(StreamableHTTPServerTransport).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
enableJsonResponse: true,
|
||||
}),
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle string "0" in query parameter', async () => {
|
||||
const req = createMockRequest({
|
||||
params: { group: 'test-group' },
|
||||
query: { stream: '0' },
|
||||
body: {
|
||||
method: 'initialize',
|
||||
},
|
||||
});
|
||||
const res = createMockResponse();
|
||||
|
||||
await handleMcpPostRequest(req, res);
|
||||
|
||||
expect(StreamableHTTPServerTransport).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
enableJsonResponse: true,
|
||||
}),
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle number 0 in body parameter', async () => {
|
||||
const req = createMockRequest({
|
||||
params: { group: 'test-group' },
|
||||
body: {
|
||||
method: 'initialize',
|
||||
stream: 0,
|
||||
},
|
||||
});
|
||||
const res = createMockResponse();
|
||||
|
||||
await handleMcpPostRequest(req, res);
|
||||
|
||||
expect(StreamableHTTPServerTransport).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
enableJsonResponse: true,
|
||||
}),
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle number 1 in body parameter', async () => {
|
||||
const req = createMockRequest({
|
||||
params: { group: 'test-group' },
|
||||
body: {
|
||||
method: 'initialize',
|
||||
stream: 1,
|
||||
},
|
||||
});
|
||||
const res = createMockResponse();
|
||||
|
||||
await handleMcpPostRequest(req, res);
|
||||
|
||||
expect(StreamableHTTPServerTransport).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
enableJsonResponse: false,
|
||||
}),
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle "yes" and "no" string values', async () => {
|
||||
// Test "yes"
|
||||
const reqYes = createMockRequest({
|
||||
params: { group: 'test-group' },
|
||||
query: { stream: 'yes' },
|
||||
body: { method: 'initialize' },
|
||||
});
|
||||
const resYes = createMockResponse();
|
||||
|
||||
await handleMcpPostRequest(reqYes, resYes);
|
||||
|
||||
expect(StreamableHTTPServerTransport).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
enableJsonResponse: false,
|
||||
}),
|
||||
);
|
||||
|
||||
jest.clearAllMocks();
|
||||
|
||||
// Test "no"
|
||||
const reqNo = createMockRequest({
|
||||
params: { group: 'test-group' },
|
||||
query: { stream: 'no' },
|
||||
body: { method: 'initialize' },
|
||||
});
|
||||
const resNo = createMockResponse();
|
||||
|
||||
await handleMcpPostRequest(reqNo, resNo);
|
||||
|
||||
expect(StreamableHTTPServerTransport).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
enableJsonResponse: true,
|
||||
}),
|
||||
);
|
||||
});
|
||||
|
||||
it('should default to streaming for invalid/unknown values', async () => {
|
||||
const req = createMockRequest({
|
||||
params: { group: 'test-group' },
|
||||
query: { stream: 'invalid-value' },
|
||||
body: {
|
||||
method: 'initialize',
|
||||
},
|
||||
});
|
||||
const res = createMockResponse();
|
||||
|
||||
await handleMcpPostRequest(req, res);
|
||||
|
||||
// Should default to streaming (enableJsonResponse: false)
|
||||
expect(StreamableHTTPServerTransport).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
enableJsonResponse: false,
|
||||
}),
|
||||
);
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
@@ -88,29 +88,6 @@ const isBearerKeyAllowedForRequest = async (req: Request, key: BearerKey): Promi
     return groupServerNames.some((name) => allowedServers.includes(name));
   }
 
-  if (key.accessType === 'custom') {
-    // For custom-scoped keys, check if the group is allowed OR if any server in the group is allowed
-    const allowedGroups = key.allowedGroups || [];
-    const allowedServers = key.allowedServers || [];
-
-    // Check if the group itself is allowed
-    const groupAllowed =
-      allowedGroups.includes(matchedGroup.name) || allowedGroups.includes(matchedGroup.id);
-    if (groupAllowed) {
-      return true;
-    }
-
-    // Check if any server in the group is allowed
-    if (allowedServers.length > 0 && Array.isArray(matchedGroup.servers)) {
-      const groupServerNames = matchedGroup.servers.map((server) =>
-        typeof server === 'string' ? server : server.name,
-      );
-      return groupServerNames.some((name) => allowedServers.includes(name));
-    }
-
-    return false;
-  }
-
   // Unknown accessType with matched group
   return false;
 }
@@ -125,8 +102,8 @@ const isBearerKeyAllowedForRequest = async (req: Request, key: BearerKey): Promi
     return false;
   }
 
-  if (key.accessType === 'servers' || key.accessType === 'custom') {
-    // For server-scoped or custom-scoped keys, check if the server is in allowedServers
+  if (key.accessType === 'servers') {
+    // For server-scoped keys, check if the server is in allowedServers
     const allowedServers = key.allowedServers || [];
     return allowedServers.includes(matchedServer.name);
   }
@@ -431,9 +408,10 @@ async function createSessionWithId(
   sessionId: string,
   group: string,
   username?: string,
+  enableJsonResponse?: boolean,
 ): Promise<StreamableHTTPServerTransport> {
   console.log(
-    `[SESSION REBUILD] Starting session rebuild for ID: ${sessionId}${username ? ` for user: ${username}` : ''}`,
+    `[SESSION REBUILD] Starting session rebuild for ID: ${sessionId}${username ? ` for user: ${username}` : ''} with enableJsonResponse: ${enableJsonResponse}`,
   );
 
   // Create a new server instance to ensure clean state
@@ -441,6 +419,7 @@ async function createSessionWithId(
 
   const transport = new StreamableHTTPServerTransport({
     sessionIdGenerator: () => sessionId, // Use the specified sessionId
+    enableJsonResponse: enableJsonResponse ?? false,
     onsessioninitialized: (initializedSessionId) => {
       console.log(
         `[SESSION REBUILD] onsessioninitialized triggered for ID: ${initializedSessionId}`,
@@ -492,14 +471,16 @@ async function createSessionWithId(
 async function createNewSession(
   group: string,
   username?: string,
+  enableJsonResponse?: boolean,
 ): Promise<StreamableHTTPServerTransport> {
   const newSessionId = randomUUID();
   console.log(
-    `[SESSION NEW] Creating new session with ID: ${newSessionId}${username ? ` for user: ${username}` : ''}`,
+    `[SESSION NEW] Creating new session with ID: ${newSessionId}${username ? ` for user: ${username}` : ''} with enableJsonResponse: ${enableJsonResponse}`,
   );
 
   const transport = new StreamableHTTPServerTransport({
     sessionIdGenerator: () => newSessionId,
+    enableJsonResponse: enableJsonResponse ?? false,
     onsessioninitialized: (sessionId) => {
       transports[sessionId] = { transport, group };
       console.log(
@@ -538,8 +519,48 @@ export const handleMcpPostRequest = async (req: Request, res: Response): Promise
   const sessionId = req.headers['mcp-session-id'] as string | undefined;
   const group = req.params.group;
   const body = req.body;
 
+  // Parse stream parameter from query string or request body
+  // Default to true (SSE streaming) for backward compatibility
+  let enableStreaming = true;
+
+  // Helper function to parse stream parameter value
+  const parseStreamParam = (value: any): boolean => {
+    if (typeof value === 'boolean') {
+      return value;
+    }
+    if (typeof value === 'string') {
+      const lowerValue = value.toLowerCase().trim();
+      // Accept 'true', '1', 'yes', 'on' as truthy
+      if (['true', '1', 'yes', 'on'].includes(lowerValue)) {
+        return true;
+      }
+      // Accept 'false', '0', 'no', 'off' as falsy
+      if (['false', '0', 'no', 'off'].includes(lowerValue)) {
+        return false;
+      }
+    }
+    if (typeof value === 'number') {
+      return value !== 0;
+    }
+    // Default to true for any other value (including undefined)
+    return true;
+  };
+
+  // Check query parameter first
+  if (req.query.stream !== undefined) {
+    enableStreaming = parseStreamParam(req.query.stream);
+  }
+  // Then check request body (has higher priority)
+  if (body && typeof body === 'object' && 'stream' in body) {
+    enableStreaming = parseStreamParam(body.stream);
+  }
+
+  // enableJsonResponse is the inverse of enableStreaming
+  const enableJsonResponse = !enableStreaming;
+
   console.log(
-    `Handling MCP post request for sessionId: ${sessionId} and group: ${group}${username ? ` for user: ${username}` : ''} with body: ${JSON.stringify(body)}`,
+    `Handling MCP post request for sessionId: ${sessionId} and group: ${group}${username ? ` for user: ${username}` : ''} with enableStreaming: ${enableStreaming}`,
   );
 
   // Get filtered settings based on user context (after setting user context)
@@ -582,7 +603,7 @@ export const handleMcpPostRequest = async (req: Request, res: Response): Promise
       );
       transport = await sessionCreationLocks[sessionId];
     } else {
-      sessionCreationLocks[sessionId] = createSessionWithId(sessionId, group, username);
+      sessionCreationLocks[sessionId] = createSessionWithId(sessionId, group, username, enableJsonResponse);
       try {
         transport = await sessionCreationLocks[sessionId];
         console.log(
@@ -619,7 +640,7 @@ export const handleMcpPostRequest = async (req: Request, res: Response): Promise
       console.log(
        `[SESSION CREATE] No session ID provided for initialize request, creating new session${username ? ` for user: ${username}` : ''}`,
       );
-      transport = await createNewSession(group, username);
+      transport = await createNewSession(group, username, enableJsonResponse);
     } else {
       // Case 4: No sessionId and not an initialize request, return error
       console.warn(
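Taken on its own, the stream-parameter parsing introduced in the hunk above can be exercised in isolation. The sketch below is a standalone copy of that helper's logic for illustration; it is not an export of the real module:

```typescript
// Standalone sketch of the stream-parameter parsing logic from the hunk above.
// Any unrecognized value falls back to true (SSE streaming) for backward compatibility.
const parseStreamParam = (value: unknown): boolean => {
  if (typeof value === 'boolean') return value;
  if (typeof value === 'string') {
    const lower = value.toLowerCase().trim();
    if (['true', '1', 'yes', 'on'].includes(lower)) return true;
    if (['false', '0', 'no', 'off'].includes(lower)) return false;
  }
  if (typeof value === 'number') return value !== 0;
  return true;
};

// enableJsonResponse is the inverse: stream=false asks for a single JSON reply.
console.log(parseStreamParam('false')); // false
console.log(parseStreamParam(0)); // false
console.log(parseStreamParam('on')); // true
console.log(parseStreamParam('garbage')); // true (default)
```

Note that the body value wins over the query string because it is checked second and overwrites the earlier result, which matches the "body should take priority" tests.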
@@ -1,7 +1,7 @@
 import { getRepositoryFactory } from '../db/index.js';
 import { VectorEmbeddingRepository } from '../db/repositories/index.js';
 import { Tool } from '../types/index.js';
-import { getAppDataSource, isDatabaseConnected, initializeDatabase } from '../db/connection.js';
+import { getAppDataSource, initializeDatabase } from '../db/connection.js';
 import { getSmartRoutingConfig } from '../utils/smartRouting.js';
 import OpenAI from 'openai';
 
@@ -197,12 +197,6 @@ export const saveToolsAsVectorEmbeddings = async (
     return;
   }
 
-  // Ensure database is initialized before using repository
-  if (!isDatabaseConnected()) {
-    console.info('Database not initialized, initializing...');
-    await initializeDatabase();
-  }
-
   const config = await getOpenAIConfig();
   const vectorRepository = getRepositoryFactory(
     'vectorEmbeddings',
@@ -251,7 +245,7 @@ export const saveToolsAsVectorEmbeddings = async (
 
     console.log(`Saved ${tools.length} tool embeddings for server: ${serverName}`);
   } catch (error) {
-    console.error(`Error saving tool embeddings for server ${serverName}:${error}`);
+    console.error(`Error saving tool embeddings for server ${serverName}:`, error);
   }
 };
@@ -173,12 +173,6 @@ export interface SystemConfig {
   oauth?: OAuthProviderConfig; // OAuth provider configuration for upstream MCP servers
   oauthServer?: OAuthServerConfig; // OAuth authorization server configuration for MCPHub itself
   enableSessionRebuild?: boolean; // Controls whether server session rebuild is enabled
-  compression?: {
-    enabled?: boolean; // Enable/disable AI compression of MCP tool outputs
-    model?: string; // AI model to use for compression (default: 'gpt-4o-mini')
-    maxInputTokens?: number; // Maximum input tokens for compression (default: 100000)
-    targetReductionRatio?: number; // Target reduction ratio, 0.0-1.0 (default: 0.5)
-  };
 }
 
 export interface UserConfig {
@@ -250,7 +244,7 @@ export interface OAuthServerConfig {
 }
 
 // Bearer authentication key configuration
-export type BearerKeyAccessType = 'all' | 'groups' | 'servers' | 'custom';
+export type BearerKeyAccessType = 'all' | 'groups' | 'servers';
 
 export interface BearerKey {
   id: string; // Unique identifier for the key
@@ -258,8 +252,8 @@ export interface BearerKey {
   token: string; // Bearer token value
   enabled: boolean; // Whether this key is enabled
   accessType: BearerKeyAccessType; // Access scope type
-  allowedGroups?: string[]; // Allowed group names when accessType === 'groups' or 'custom'
-  allowedServers?: string[]; // Allowed server names when accessType === 'servers' or 'custom'
+  allowedGroups?: string[]; // Allowed group names when accessType === 'groups'
+  allowedServers?: string[]; // Allowed server names when accessType === 'servers'
 }
 
 // Represents the settings for MCP servers
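With 'custom' removed, bearer-key scoping reduces to three cases. The sketch below illustrates the resulting decision using the types from the diff; the helper name and the group-lookup shape are illustrative assumptions, not the real module's API:

```typescript
// Sketch of bearer-key scoping after the 'custom' access type was dropped.
type BearerKeyAccessType = 'all' | 'groups' | 'servers';

interface BearerKey {
  accessType: BearerKeyAccessType;
  allowedGroups?: string[];
  allowedServers?: string[];
}

// Hypothetical helper: is `serverName` reachable with this key, given the
// names of the groups that contain it?
function isServerAllowed(key: BearerKey, serverName: string, containingGroups: string[]): boolean {
  if (key.accessType === 'all') return true;
  if (key.accessType === 'servers') return (key.allowedServers ?? []).includes(serverName);
  // 'groups': allowed if any group containing the server is itself allowed
  return containingGroups.some((g) => (key.allowedGroups ?? []).includes(g));
}

const key: BearerKey = { accessType: 'groups', allowedGroups: ['test-group'] };
console.log(isServerAllowed(key, 'my-server', ['test-group'])); // true
```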
@@ -117,7 +117,6 @@ export async function migrateToDatabase(): Promise<boolean> {
       oauth: settings.systemConfig.oauth || {},
       oauthServer: settings.systemConfig.oauthServer || {},
       enableSessionRebuild: settings.systemConfig.enableSessionRebuild,
-      compression: settings.systemConfig.compression || {},
     };
     await systemConfigRepo.update(systemConfig);
     console.log(' - System configuration updated');
tests/integration/stream-parameter.test.ts (152 lines, new file)
@@ -0,0 +1,152 @@
/**
 * Integration test for stream parameter support
 * This test demonstrates the usage of stream parameter in MCP requests
 */

import { describe, it, expect } from '@jest/globals';

describe('Stream Parameter Integration Test', () => {
  it('should demonstrate stream parameter usage', () => {
    // Example 1: Using stream=false in query parameter
    const queryExample = {
      url: '/mcp?stream=false',
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Accept: 'application/json, text/event-stream',
      },
      body: {
        method: 'initialize',
        params: {
          protocolVersion: '2025-03-26',
          capabilities: {},
          clientInfo: {
            name: 'TestClient',
            version: '1.0.0',
          },
        },
        jsonrpc: '2.0',
        id: 1,
      },
    };

    expect(queryExample.url).toContain('stream=false');

    // Example 2: Using stream parameter in request body
    const bodyExample = {
      url: '/mcp',
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Accept: 'application/json, text/event-stream',
      },
      body: {
        method: 'initialize',
        stream: false, // Body parameter
        params: {
          protocolVersion: '2025-03-26',
          capabilities: {},
          clientInfo: {
            name: 'TestClient',
            version: '1.0.0',
          },
        },
        jsonrpc: '2.0',
        id: 1,
      },
    };

    expect(bodyExample.body.stream).toBe(false);

    // Example 3: Default behavior (streaming enabled)
    const defaultExample = {
      url: '/mcp',
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Accept: 'application/json, text/event-stream',
      },
      body: {
        method: 'initialize',
        params: {
          protocolVersion: '2025-03-26',
          capabilities: {},
          clientInfo: {
            name: 'TestClient',
            version: '1.0.0',
          },
        },
        jsonrpc: '2.0',
        id: 1,
      },
    };

    expect(defaultExample.body).not.toHaveProperty('stream');
  });

  it('should show expected response formats', () => {
    // Expected response format for stream=false (JSON)
    const jsonResponse = {
      jsonrpc: '2.0',
      result: {
        protocolVersion: '2025-03-26',
        capabilities: {
          tools: {},
          prompts: {},
        },
        serverInfo: {
          name: 'MCPHub',
          version: '1.0.0',
        },
      },
      id: 1,
    };

    expect(jsonResponse).toHaveProperty('jsonrpc');
    expect(jsonResponse).toHaveProperty('result');

    // Expected response format for stream=true (SSE)
    const sseResponse = {
      headers: {
        'Content-Type': 'text/event-stream',
        'mcp-session-id': '550e8400-e29b-41d4-a716-446655440000',
      },
      body: 'data: {"jsonrpc":"2.0","result":{...},"id":1}\n\n',
    };

    expect(sseResponse.headers['Content-Type']).toBe('text/event-stream');
    expect(sseResponse.headers).toHaveProperty('mcp-session-id');
  });

  it('should demonstrate all route variants', () => {
    const routes = [
      { route: '/mcp?stream=false', description: 'Global route with non-streaming' },
      { route: '/mcp/mygroup?stream=false', description: 'Group route with non-streaming' },
      { route: '/mcp/myserver?stream=false', description: 'Server route with non-streaming' },
      { route: '/mcp/$smart?stream=false', description: 'Smart routing with non-streaming' },
    ];

    routes.forEach((item) => {
      expect(item.route).toContain('stream=false');
      expect(item.description).toBeTruthy();
    });
  });

  it('should show parameter priority', () => {
    // Body parameter takes priority over query parameter
    const mixedExample = {
      url: '/mcp?stream=true', // Query says stream=true
      body: {
        method: 'initialize',
        stream: false, // Body says stream=false - this takes priority
        params: {},
        jsonrpc: '2.0',
        id: 1,
      },
    };

    // In this case, the effective value should be false (from body)
    expect(mixedExample.body.stream).toBe(false);
    expect(mixedExample.url).toContain('stream=true');
  });
});
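The request shapes shown in the test file above can be assembled by a small client-side builder. The sketch below constructs the non-streaming initialize call; the base URL is a placeholder, and the builder itself is illustrative rather than part of MCPHub:

```typescript
// Sketch: constructing a non-streaming MCP initialize request.
// The host/port are placeholders; substitute your MCPHub instance.
function buildInitializeRequest(baseUrl: string, stream: boolean) {
  return {
    url: `${baseUrl}/mcp?stream=${stream}`,
    method: 'POST' as const,
    headers: {
      'Content-Type': 'application/json',
      Accept: 'application/json, text/event-stream',
    },
    body: JSON.stringify({
      jsonrpc: '2.0',
      id: 1,
      method: 'initialize',
      params: {
        protocolVersion: '2025-03-26',
        capabilities: {},
        clientInfo: { name: 'TestClient', version: '1.0.0' },
      },
    }),
  };
}

const request = buildInitializeRequest('http://localhost:3000', false);
console.log(request.url); // http://localhost:3000/mcp?stream=false
// With stream=false the server replies with a single application/json body
// instead of a text/event-stream.
```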
@@ -1,428 +0,0 @@
// Mock the DAO module before imports
jest.mock('../../src/dao/index.js', () => ({
  getSystemConfigDao: jest.fn(),
}));

// Mock smart routing config
jest.mock('../../src/utils/smartRouting.js', () => ({
  getSmartRoutingConfig: jest.fn(),
}));

// Mock OpenAI
jest.mock('openai', () => {
  return {
    __esModule: true,
    default: jest.fn().mockImplementation(() => ({
      chat: {
        completions: {
          create: jest.fn(),
        },
      },
    })),
  };
});

import {
  getCompressionConfig,
  isCompressionEnabled,
  estimateTokenCount,
  shouldCompress,
  compressOutput,
  compressToolResult,
} from '../../src/services/compressionService.js';
import { getSystemConfigDao } from '../../src/dao/index.js';
import { getSmartRoutingConfig } from '../../src/utils/smartRouting.js';
import OpenAI from 'openai';

describe('CompressionService', () => {
  const mockSystemConfigDao = {
    get: jest.fn(),
    getSection: jest.fn(),
    update: jest.fn(),
    updateSection: jest.fn(),
  };

  beforeEach(() => {
    jest.clearAllMocks();
    (getSystemConfigDao as jest.Mock).mockReturnValue(mockSystemConfigDao);
  });

  describe('getCompressionConfig', () => {
    it('should return default config when no config is set', async () => {
      mockSystemConfigDao.get.mockResolvedValue({});

      const config = await getCompressionConfig();

      expect(config).toEqual({
        enabled: false,
        model: 'gpt-4o-mini',
        maxInputTokens: 100000,
        targetReductionRatio: 0.5,
      });
    });

    it('should return configured values when set', async () => {
      mockSystemConfigDao.get.mockResolvedValue({
        compression: {
          enabled: true,
          model: 'gpt-4o',
          maxInputTokens: 50000,
          targetReductionRatio: 0.3,
        },
      });

      const config = await getCompressionConfig();

      expect(config).toEqual({
        enabled: true,
        model: 'gpt-4o',
        maxInputTokens: 50000,
        targetReductionRatio: 0.3,
      });
    });

    it('should use defaults for missing values', async () => {
      mockSystemConfigDao.get.mockResolvedValue({
        compression: {
          enabled: true,
        },
      });

      const config = await getCompressionConfig();

      expect(config).toEqual({
        enabled: true,
        model: 'gpt-4o-mini',
        maxInputTokens: 100000,
        targetReductionRatio: 0.5,
      });
    });

    it('should return defaults on error', async () => {
      mockSystemConfigDao.get.mockRejectedValue(new Error('Test error'));

      const config = await getCompressionConfig();

      expect(config).toEqual({
        enabled: false,
        model: 'gpt-4o-mini',
        maxInputTokens: 100000,
        targetReductionRatio: 0.5,
      });
    });
  });

  describe('isCompressionEnabled', () => {
    it('should return false when compression is disabled', async () => {
      mockSystemConfigDao.get.mockResolvedValue({
        compression: { enabled: false },
      });

      const enabled = await isCompressionEnabled();

      expect(enabled).toBe(false);
    });

    it('should return false when enabled but no API key', async () => {
      mockSystemConfigDao.get.mockResolvedValue({
        compression: { enabled: true },
      });
      (getSmartRoutingConfig as jest.Mock).mockResolvedValue({
        openaiApiKey: '',
      });

      const enabled = await isCompressionEnabled();

      expect(enabled).toBe(false);
    });

    it('should return true when enabled and API key is set', async () => {
      mockSystemConfigDao.get.mockResolvedValue({
        compression: { enabled: true },
      });
      (getSmartRoutingConfig as jest.Mock).mockResolvedValue({
        openaiApiKey: 'test-api-key',
      });

      const enabled = await isCompressionEnabled();

      expect(enabled).toBe(true);
    });
  });

  describe('estimateTokenCount', () => {
    it('should estimate tokens for short text', () => {
      const text = 'Hello world';
      const tokens = estimateTokenCount(text);

      // Estimate based on ~4 chars per token
      expect(tokens).toBe(Math.ceil(text.length / 4));
    });

    it('should estimate tokens for longer text', () => {
      const text = 'This is a longer piece of text that should have more tokens';
      const tokens = estimateTokenCount(text);

      // Estimate based on ~4 chars per token
      expect(tokens).toBe(Math.ceil(text.length / 4));
    });

    it('should handle empty string', () => {
      const tokens = estimateTokenCount('');

      expect(tokens).toBe(0);
    });
  });

  describe('shouldCompress', () => {
    it('should return false for small content', () => {
      const content = 'Small content';
      const result = shouldCompress(content, 100000);

      expect(result).toBe(false);
    });

    it('should return true for large content', () => {
      // Create content larger than the threshold
      const content = 'x'.repeat(5000);
      const result = shouldCompress(content, 100000);

      expect(result).toBe(true);
    });

    it('should use 10% of maxInputTokens as threshold', () => {
      // Test threshold behavior with different content sizes
      const smallContent = 'x'.repeat(300);
      const largeContent = 'x'.repeat(500);

      expect(shouldCompress(smallContent, 1000)).toBe(false);
      expect(shouldCompress(largeContent, 1000)).toBe(true);
    });
  });

  describe('compressOutput', () => {
    it('should return original content when compression is disabled', async () => {
      mockSystemConfigDao.get.mockResolvedValue({
        compression: { enabled: false },
      });

      const content = 'Test content';
      const result = await compressOutput(content);

      expect(result).toEqual({
        compressed: content,
        originalLength: content.length,
        compressedLength: content.length,
        wasCompressed: false,
      });
    });

    it('should return original content when content is too small', async () => {
      mockSystemConfigDao.get.mockResolvedValue({
        compression: { enabled: true, maxInputTokens: 100000 },
      });
      (getSmartRoutingConfig as jest.Mock).mockResolvedValue({
        openaiApiKey: 'test-api-key',
      });

      const content = 'Small content';
      const result = await compressOutput(content);

      expect(result.wasCompressed).toBe(false);
      expect(result.compressed).toBe(content);
    });

    it('should return original content when no API key is configured', async () => {
      mockSystemConfigDao.get.mockResolvedValue({
        compression: { enabled: true },
      });
      (getSmartRoutingConfig as jest.Mock).mockResolvedValue({
        openaiApiKey: '',
      });

      const content = 'x'.repeat(5000);
      const result = await compressOutput(content);

      expect(result.wasCompressed).toBe(false);
      expect(result.compressed).toBe(content);
    });

    it('should compress content when enabled and content is large', async () => {
      mockSystemConfigDao.get.mockResolvedValue({
        compression: { enabled: true, model: 'gpt-4o-mini', maxInputTokens: 100000 },
      });
      (getSmartRoutingConfig as jest.Mock).mockResolvedValue({
        openaiApiKey: 'test-api-key',
        openaiApiBaseUrl: 'https://api.openai.com/v1',
      });

      const originalContent = 'x'.repeat(5000);
      const compressedContent = 'y'.repeat(2000);

      // Mock OpenAI response
      const mockCreate = jest.fn().mockResolvedValue({
        choices: [{ message: { content: compressedContent } }],
      });

      (OpenAI as unknown as jest.Mock).mockImplementation(() => ({
        chat: {
          completions: {
            create: mockCreate,
          },
        },
      }));

      const result = await compressOutput(originalContent, {
        toolName: 'test-tool',
        serverName: 'test-server',
      });

      expect(result.wasCompressed).toBe(true);
      expect(result.compressed).toBe(compressedContent);
      expect(result.originalLength).toBe(originalContent.length);
      expect(result.compressedLength).toBe(compressedContent.length);
    });

    it('should return original content when compressed is larger', async () => {
      mockSystemConfigDao.get.mockResolvedValue({
        compression: { enabled: true, model: 'gpt-4o-mini', maxInputTokens: 100000 },
      });
      (getSmartRoutingConfig as jest.Mock).mockResolvedValue({
        openaiApiKey: 'test-api-key',
        openaiApiBaseUrl: 'https://api.openai.com/v1',
      });

      const originalContent = 'x'.repeat(5000);
      const largerContent = 'y'.repeat(6000);

      const mockCreate = jest.fn().mockResolvedValue({
        choices: [{ message: { content: largerContent } }],
      });

      (OpenAI as unknown as jest.Mock).mockImplementation(() => ({
        chat: {
          completions: {
            create: mockCreate,
          },
        },
      }));

      const result = await compressOutput(originalContent);

      expect(result.wasCompressed).toBe(false);
      expect(result.compressed).toBe(originalContent);
    });

    it('should return original content on API error', async () => {
      mockSystemConfigDao.get.mockResolvedValue({
        compression: { enabled: true, model: 'gpt-4o-mini', maxInputTokens: 100000 },
      });
      (getSmartRoutingConfig as jest.Mock).mockResolvedValue({
        openaiApiKey: 'test-api-key',
        openaiApiBaseUrl: 'https://api.openai.com/v1',
      });

      const mockCreate = jest.fn().mockRejectedValue(new Error('API error'));

      (OpenAI as unknown as jest.Mock).mockImplementation(() => ({
        chat: {
          completions: {
            create: mockCreate,
          },
        },
      }));

      const content = 'x'.repeat(5000);
      const result = await compressOutput(content);

      expect(result.wasCompressed).toBe(false);
      expect(result.compressed).toBe(content);
    });
  });

  describe('compressToolResult', () => {
    it('should return original result when compression is disabled', async () => {
      mockSystemConfigDao.get.mockResolvedValue({
        compression: { enabled: false },
      });

      const result = {
        content: [{ type: 'text', text: 'Test output' }],
      };

      const compressed = await compressToolResult(result);

      expect(compressed).toEqual(result);
    });

    it('should not compress error results', async () => {
      mockSystemConfigDao.get.mockResolvedValue({
        compression: { enabled: true },
      });
      (getSmartRoutingConfig as jest.Mock).mockResolvedValue({
        openaiApiKey: 'test-api-key',
      });

      const result = {
        content: [{ type: 'text', text: 'Error message' }],
        isError: true,
      };

      const compressed = await compressToolResult(result);

      expect(compressed).toEqual(result);
    });

    it('should handle results without content array', async () => {
      mockSystemConfigDao.get.mockResolvedValue({
        compression: { enabled: true },
      });
      (getSmartRoutingConfig as jest.Mock).mockResolvedValue({
        openaiApiKey: 'test-api-key',
      });

      const result = { someOtherField: 'value' };

      const compressed = await compressToolResult(result);

      expect(compressed).toEqual(result);
    });

    it('should only compress text content items', async () => {
      mockSystemConfigDao.get.mockResolvedValue({
        compression: { enabled: true, maxInputTokens: 100000 },
      });
      (getSmartRoutingConfig as jest.Mock).mockResolvedValue({
        openaiApiKey: 'test-api-key',
        openaiApiBaseUrl: 'https://api.openai.com/v1',
      });

      const largeText = 'x'.repeat(5000);
      const compressedText = 'y'.repeat(2000);

      const mockCreate = jest.fn().mockResolvedValue({
        choices: [{ message: { content: compressedText } }],
      });

      (OpenAI as unknown as jest.Mock).mockImplementation(() => ({
        chat: {
          completions: {
            create: mockCreate,
          },
        },
      }));

      const result = {
        content: [
          { type: 'text', text: largeText },
          { type: 'image', data: 'base64data' },
        ],
      };

      const compressed = await compressToolResult(result);

      expect(compressed.content[0].text).toBe(compressedText);
      expect(compressed.content[1]).toEqual({ type: 'image', data: 'base64data' });
    });
  });
});