Compare commits


2 Commits

Author  SHA1  Message  Date

Copilot  3e9e5cc3c9  feat: Auto-start Docker daemon when installed in container (#370)  2025-10-13 22:38:13 +08:00
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: samanhappy <2755122+samanhappy@users.noreply.github.com>
Co-authored-by: samanhappy <samanhappy@gmail.com>

samanhappy  16a92096b3  feat: Enhance package root detection and version retrieval using ESM-compatible methods (#371)  2025-10-13 22:36:29 +08:00
10 changed files with 351 additions and 84 deletions

View File

@@ -1,6 +1,6 @@
# Docker CLI Installation Test Procedure
# Docker Engine Installation Test Procedure
This document describes how to test the Docker CLI installation feature added with the `INSTALL_EXT=true` build argument.
This document describes how to test the Docker Engine installation feature added with the `INSTALL_EXT=true` build argument.
## Test 1: Build with INSTALL_EXT=false (default)
@@ -12,7 +12,7 @@ docker build -t mcphub:base .
docker run --rm mcphub:base docker --version
```
**Expected Result**: `docker: not found` error (Docker CLI is NOT installed)
**Expected Result**: `docker: not found` error (Docker is NOT installed)
## Test 2: Build with INSTALL_EXT=true
@@ -26,13 +26,44 @@ docker run --rm mcphub:extended docker --version
**Expected Result**: Docker version output (e.g., `Docker version 27.x.x, build xxxxx`)
## Test 3: Docker-in-Docker Workflow
## Test 3: Docker-in-Docker with Auto-start Daemon
```bash
# Build with extended features
docker build --build-arg INSTALL_EXT=true -t mcphub:extended .
# Run with Docker socket mounted
# Run with privileged mode (allows Docker daemon to start)
docker run -d \
--name mcphub-test \
--privileged \
-p 3000:3000 \
mcphub:extended
# Wait a few seconds for daemon to start
sleep 5
# Test Docker commands from inside the container
docker exec mcphub-test docker ps
docker exec mcphub-test docker images
docker exec mcphub-test docker info
# Cleanup
docker stop mcphub-test
docker rm mcphub-test
```
**Expected Result**:
- Docker daemon should auto-start inside the container
- Docker commands should work without mounting the host's Docker socket
- `docker info` should show the container's own Docker daemon
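A quick optional spot-check, sketched under the assumption that the entrypoint from this PR started `dockerd` with the `vfs` storage driver and logs to `/var/log/dockerd.log` (both values appear in the updated `entrypoint.sh`):
```bash
# Confirm the inner daemon is up and using the vfs storage driver set by the entrypoint
docker exec mcphub-test docker info --format '{{.ServerVersion}} {{.Driver}}'

# Inspect the daemon log (path taken from the updated entrypoint) if startup seems slow
docker exec mcphub-test cat /var/log/dockerd.log
```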
## Test 4: Docker-in-Docker with Host Socket (Alternative)
```bash
# Build with extended features
docker build --build-arg INSTALL_EXT=true -t mcphub:extended .
# Run with Docker socket mounted (uses host's daemon)
docker run -d \
--name mcphub-test \
-p 3000:3000 \
@@ -48,9 +79,11 @@ docker stop mcphub-test
docker rm mcphub-test
```
**Expected Result**: Docker commands should work and show the host's containers and images
**Expected Result**:
- Docker daemon should NOT auto-start (socket already exists from host)
- Docker commands should work and show the host's containers and images
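To confirm the container really is talking to the host's daemon rather than an inner one, comparing the node name reported by `docker info` on both sides is a simple, informal check:
```bash
# Reported daemon hostname on the host...
docker info --format '{{.Name}}'
# ...and as seen from inside the container; the values should match when the host socket is mounted
docker exec mcphub-test docker info --format '{{.Name}}'
```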
## Test 4: Verify Image Size
## Test 5: Verify Image Size
```bash
# Build both versions
@@ -63,9 +96,9 @@ docker images mcphub:*
**Expected Result**:
- The `extended` image should be larger than the `base` image
- The size difference should be reasonable (Docker CLI adds ~60-80MB)
- The size difference should be reasonable (Docker Engine adds ~100-150MB)
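One way to put the two image sizes side by side (this assumes both tags were built as above):
```bash
# List both tags of the mcphub repository with their sizes
docker images --format 'table {{.Repository}}:{{.Tag}}\t{{.Size}}' mcphub
```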
## Test 5: Architecture Support
## Test 6: Architecture Support
```bash
# On AMD64/x86_64
@@ -77,12 +110,15 @@ docker build --build-arg INSTALL_EXT=true --platform linux/arm64 -t mcphub:exten
**Expected Result**:
- Both builds should succeed
- AMD64 includes Chrome/Playwright + Docker CLI
- ARM64 includes Docker CLI only (Chrome installation is skipped)
- AMD64 includes Chrome/Playwright + Docker Engine
- ARM64 includes Docker Engine only (Chrome installation is skipped)
## Notes
- The Docker CLI installation follows the official Docker documentation
- The Docker Engine installation follows the official Docker documentation
- Includes full Docker daemon (`dockerd`), CLI (`docker`), and containerd
- The daemon auto-starts when running in privileged mode
- The installation uses the Debian Bookworm repository
- All temporary files are cleaned up to minimize image size
- The feature is opt-in via the `INSTALL_EXT` build argument
- `iptables` is installed as it's required for Docker networking
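Since `iptables` is pulled in specifically for Docker networking (see the Dockerfile change below), a minimal presence check might look like the following; overriding the entrypoint keeps the daemon auto-start logic from running:
```bash
# Bypass the entrypoint and just print the iptables version from the extended image
# (if the binary is not on PATH in the image, use /usr/sbin/iptables instead)
docker run --rm --entrypoint iptables mcphub:extended --version
```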

View File

@@ -22,15 +22,15 @@ RUN if [ "$INSTALL_EXT" = "true" ]; then \
else \
echo "Skipping Chrome installation on non-amd64 architecture: $ARCH"; \
fi; \
# Install Docker CLI \
# Install Docker Engine (includes CLI and daemon) \
apt-get update && \
apt-get install -y ca-certificates curl && \
apt-get install -y ca-certificates curl iptables && \
install -m 0755 -d /etc/apt/keyrings && \
curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc && \
chmod a+r /etc/apt/keyrings/docker.asc && \
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian bookworm stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null && \
apt-get update && \
apt-get install -y docker-ce-cli && \
apt-get install -y docker-ce docker-ce-cli containerd.io && \
apt-get clean && rm -rf /var/lib/apt/lists/*; \
fi
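As a rough sanity check that all three pieces installed by this RUN step (CLI, daemon, containerd) are present in the extended image, something like the following should work; `--entrypoint sh` is only there to skip the daemon auto-start in `entrypoint.sh`:
```bash
# Print the versions of the CLI, the daemon, and containerd from the extended image
docker run --rm --entrypoint sh mcphub:extended -c \
  'docker --version && dockerd --version && containerd --version'
```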

View File

@@ -46,10 +46,18 @@ docker run -d \
The Docker image supports an `INSTALL_EXT` build argument to include additional tools:
```bash
# Build with extended features (includes Docker CLI, Chrome/Playwright)
# Build with extended features (includes Docker Engine, Chrome/Playwright)
docker build --build-arg INSTALL_EXT=true -t mcphub:extended .
# Run the container with Docker socket mounted (for Docker-in-Docker workflows)
# Option 1: Run with automatic Docker-in-Docker (requires privileged mode)
docker run -d \
--name mcphub \
--privileged \
-p 3000:3000 \
-v $(pwd)/mcp_settings.json:/app/mcp_settings.json \
mcphub:extended
# Option 2: Run with Docker socket mounted (use host's Docker daemon)
docker run -d \
--name mcphub \
-p 3000:3000 \
@@ -57,20 +65,24 @@ docker run -d \
-v /var/run/docker.sock:/var/run/docker.sock \
mcphub:extended
# Verify Docker CLI is available
# Verify Docker is available
docker exec mcphub docker --version
docker exec mcphub docker ps
```
<Note>
**What's included with INSTALL_EXT=true:**
- **Docker CLI**: For container management and Docker-based workflows
- **Docker Engine**: Full Docker daemon with CLI for container management. The daemon auto-starts when the container runs in privileged mode.
- **Chrome/Playwright** (amd64 only): For browser automation tasks
The extended image is larger but provides additional capabilities for advanced use cases.
</Note>
<Warning>
When mounting the Docker socket (`/var/run/docker.sock`), the container gains access to the host's Docker daemon. Only use this in trusted environments.
**Docker-in-Docker Security Considerations:**
- **Privileged mode** (`--privileged`): Required for the Docker daemon to start inside the container. This gives the container elevated permissions on the host.
- **Docker socket mounting** (`/var/run/docker.sock`): Gives the container access to the host's Docker daemon. Both approaches should only be used in trusted environments.
- For production, consider using Docker socket mounting instead of privileged mode for better security.
</Warning>
## Docker Compose Setup

View File

@@ -46,10 +46,18 @@ docker run -d \
The Docker image supports the `INSTALL_EXT` build argument to include additional tools:
```bash
# Build the extended version (includes Docker CLI, Chrome/Playwright)
# Build the extended version (includes Docker Engine, Chrome/Playwright)
docker build --build-arg INSTALL_EXT=true -t mcphub:extended .
# Run the container with the Docker socket mounted (for Docker-in-Docker workflows)
# Option 1: Use automatic Docker-in-Docker (requires privileged mode)
docker run -d \
--name mcphub \
--privileged \
-p 3000:3000 \
-v $(pwd)/mcp_settings.json:/app/mcp_settings.json \
mcphub:extended
# Option 2: Mount the Docker socket (use the host's Docker daemon)
docker run -d \
--name mcphub \
-p 3000:3000 \
@@ -57,20 +65,24 @@ docker run -d \
-v /var/run/docker.sock:/var/run/docker.sock \
mcphub:extended
# Verify the Docker CLI is available
# Verify Docker is available
docker exec mcphub docker --version
docker exec mcphub docker ps
```
<Note>
**What's included with INSTALL_EXT=true:**
- **Docker CLI**: For container management and Docker-based workflows
- **Docker Engine**: Full Docker daemon with CLI for container management. The daemon auto-starts when the container runs in privileged mode.
- **Chrome/Playwright** (amd64 only): For browser automation tasks
The extended image is larger but provides additional capabilities for advanced use cases.
</Note>
<Warning>
When mounting the Docker socket (`/var/run/docker.sock`), the container gains access to the host's Docker daemon. Only use this in trusted environments.
**Docker-in-Docker Security Considerations:**
- **Privileged mode** (`--privileged`): Required to start the Docker daemon inside the container. This grants the container elevated permissions on the host.
- **Docker socket mounting** (`/var/run/docker.sock`): Gives the container access to the host's Docker daemon. Both approaches should only be used in trusted environments.
- For production, consider using Docker socket mounting instead of privileged mode for better security.
</Warning>
## Docker Compose Setup

View File

@@ -4,7 +4,7 @@ NPM_REGISTRY=${NPM_REGISTRY:-https://registry.npmjs.org/}
echo "Setting npm registry to ${NPM_REGISTRY}"
npm config set registry "$NPM_REGISTRY"
# 处理 HTTP_PROXY HTTPS_PROXY 环境变量
# Handle HTTP_PROXY and HTTPS_PROXY environment variables
if [ -n "$HTTP_PROXY" ]; then
echo "Setting HTTP proxy to ${HTTP_PROXY}"
npm config set proxy "$HTTP_PROXY"
@@ -19,4 +19,33 @@ fi
echo "Using REQUEST_TIMEOUT: $REQUEST_TIMEOUT"
# Auto-start Docker daemon if Docker is installed
if command -v dockerd >/dev/null 2>&1; then
echo "Docker daemon detected, starting dockerd..."
# Create docker directory if it doesn't exist
mkdir -p /var/lib/docker
# Start dockerd in the background
dockerd --host=unix:///var/run/docker.sock --storage-driver=vfs > /var/log/dockerd.log 2>&1 &
# Wait for Docker daemon to be ready
echo "Waiting for Docker daemon to be ready..."
TIMEOUT=15
ELAPSED=0
while ! docker info >/dev/null 2>&1; do
if [ $ELAPSED -ge $TIMEOUT ]; then
echo "WARNING: Docker daemon failed to start within ${TIMEOUT} seconds"
echo "Check /var/log/dockerd.log for details"
break
fi
sleep 1
ELAPSED=$((ELAPSED + 1))
done
if docker info >/dev/null 2>&1; then
echo "Docker daemon started successfully"
fi
fi
exec "$@"

View File

@@ -15,9 +15,26 @@ import {
} from './services/sseService.js';
import { initializeDefaultUser } from './models/User.js';
import { sseUserContextMiddleware } from './middlewares/userContext.js';
import { findPackageRoot } from './utils/path.js';
import { getCurrentModuleDir } from './utils/moduleDir.js';
// Get the current working directory (will be project root in most cases)
const currentFileDir = process.cwd() + '/src';
/**
* Get the directory of the current module
* This is wrapped in a function to allow easy mocking in test environments
*/
function getCurrentFileDir(): string {
// In test environments, use process.cwd() to avoid import.meta issues
if (process.env.NODE_ENV === 'test' || process.env.JEST_WORKER_ID !== undefined) {
return process.cwd();
}
try {
return getCurrentModuleDir();
} catch {
// Fallback for environments where import.meta might not be available
return process.cwd();
}
}
export class AppServer {
private app: express.Application;
@@ -167,10 +184,11 @@ export class AppServer {
private findFrontendDistPath(): string | null {
// Debug flag for detailed logging
const debug = process.env.DEBUG === 'true';
const currentDir = getCurrentFileDir();
if (debug) {
console.log('DEBUG: Current directory:', process.cwd());
console.log('DEBUG: Script directory:', currentFileDir);
console.log('DEBUG: Script directory:', currentDir);
}
// First, find the package root directory
@@ -205,51 +223,9 @@ export class AppServer {
// Helper method to find the package root (where package.json is located)
private findPackageRoot(): string | null {
const debug = process.env.DEBUG === 'true';
// Possible locations for package.json
const possibleRoots = [
// Standard npm package location
path.resolve(currentFileDir, '..', '..'),
// Current working directory
process.cwd(),
// When running from dist directory
path.resolve(currentFileDir, '..'),
// When installed via npx
path.resolve(currentFileDir, '..', '..', '..'),
];
// Special handling for npx
if (process.argv[1] && process.argv[1].includes('_npx')) {
const npxDir = path.dirname(process.argv[1]);
possibleRoots.unshift(path.resolve(npxDir, '..'));
}
if (debug) {
console.log('DEBUG: Checking for package.json in:', possibleRoots);
}
for (const root of possibleRoots) {
const packageJsonPath = path.join(root, 'package.json');
if (fs.existsSync(packageJsonPath)) {
try {
const pkg = JSON.parse(fs.readFileSync(packageJsonPath, 'utf8'));
if (pkg.name === 'mcphub' || pkg.name === '@samanhappy/mcphub') {
if (debug) {
console.log(`DEBUG: Found package.json at ${packageJsonPath}`);
}
return root;
}
} catch (e) {
if (debug) {
console.error(`DEBUG: Failed to parse package.json at ${packageJsonPath}:`, e);
}
// Continue to the next potential root
}
}
}
return null;
// Use the shared utility function which properly handles ESM module paths
const currentDir = getCurrentFileDir();
return findPackageRoot(currentDir);
}
}

src/utils/moduleDir.ts (new file, 11 lines added)
View File

@@ -0,0 +1,11 @@
import { fileURLToPath } from 'url';
import path from 'path';
/**
* Get the directory of the current module
* This is in a separate file to allow mocking in test environments
*/
export function getCurrentModuleDir(): string {
const currentModuleFile = fileURLToPath(import.meta.url);
return path.dirname(currentModuleFile);
}

View File

@@ -1,10 +1,171 @@
import fs from 'fs';
import path from 'path';
import { dirname } from 'path';
import { getCurrentModuleDir } from './moduleDir.js';
// Project root directory - use process.cwd() as a simpler alternative
const rootDir = process.cwd();
// Cache the package root for performance
let cachedPackageRoot: string | null | undefined = undefined;
/**
* Initialize package root by trying to find it using the module directory
* This should be called when the module is first loaded
*/
function initializePackageRoot(): void {
// Skip initialization in test environments
if (process.env.NODE_ENV === 'test' || process.env.JEST_WORKER_ID !== undefined) {
return;
}
try {
// Try to get the current module's directory
const currentModuleDir = getCurrentModuleDir();
// This file is in src/utils/path.ts (or dist/utils/path.js when compiled)
// So package.json should be 2 levels up
const possibleRoots = [
path.resolve(currentModuleDir, '..', '..'), // dist/utils (or src/utils) -> package root
path.resolve(currentModuleDir, '..'), // one level up, in case utils sits directly under the package root
];
for (const root of possibleRoots) {
const packageJsonPath = path.join(root, 'package.json');
if (fs.existsSync(packageJsonPath)) {
try {
const pkg = JSON.parse(fs.readFileSync(packageJsonPath, 'utf8'));
if (pkg.name === 'mcphub' || pkg.name === '@samanhappy/mcphub') {
cachedPackageRoot = root;
return;
}
} catch {
// Continue checking
}
}
}
} catch {
// If initialization fails, cachedPackageRoot remains undefined
// and findPackageRoot will search normally
}
}
// Initialize on module load (unless in test environment)
initializePackageRoot();
/**
* Find the package root directory (where package.json is located)
* This works correctly when the package is installed globally or locally
* @param startPath Starting path to search from (defaults to checking module paths)
* @returns The package root directory path, or null if not found
*/
export const findPackageRoot = (startPath?: string): string | null => {
// Return cached value if available and no specific start path is requested
if (cachedPackageRoot !== undefined && !startPath) {
return cachedPackageRoot;
}
const debug = process.env.DEBUG === 'true';
// Possible locations for package.json relative to the search path
const possibleRoots: string[] = [];
if (startPath) {
// When start path is provided (from fileURLToPath(import.meta.url))
possibleRoots.push(
// When in dist/utils (compiled code) - go up 2 levels
path.resolve(startPath, '..', '..'),
// When in dist/ (compiled code) - go up 1 level
path.resolve(startPath, '..'),
// Direct parent directories
path.resolve(startPath)
);
}
// Try to use require.resolve to find the module location (works in CommonJS and ESM with createRequire)
try {
// In ESM, we can use import.meta.resolve, but it's async in some versions
// So we'll try to find the module by checking the node_modules structure
// Check if this file is in a node_modules installation
const currentFile = new Error().stack?.split('\n')[2]?.match(/\((.+?):\d+:\d+\)$/)?.[1];
if (currentFile) {
const nodeModulesIndex = currentFile.indexOf('node_modules');
if (nodeModulesIndex !== -1) {
// Extract the package path from node_modules
const afterNodeModules = currentFile.substring(nodeModulesIndex + 'node_modules'.length + 1);
const packageNameEnd = afterNodeModules.indexOf(path.sep);
if (packageNameEnd !== -1) {
const packagePath = currentFile.substring(0, nodeModulesIndex + 'node_modules'.length + 1 + packageNameEnd);
possibleRoots.push(packagePath);
}
}
}
} catch {
// Ignore errors
}
// Check module.filename location (works in Node.js when available)
if (typeof __filename !== 'undefined') {
const moduleDir = path.dirname(__filename);
possibleRoots.push(
path.resolve(moduleDir, '..', '..'),
path.resolve(moduleDir, '..')
);
}
// Check common installation locations
possibleRoots.push(
// Current working directory (for development/tests)
process.cwd(),
// Parent of cwd
path.resolve(process.cwd(), '..')
);
if (debug) {
console.log('DEBUG: Searching for package.json from:', startPath || 'multiple locations');
console.log('DEBUG: Checking paths:', possibleRoots);
}
// Remove duplicates
const uniqueRoots = [...new Set(possibleRoots)];
for (const root of uniqueRoots) {
const packageJsonPath = path.join(root, 'package.json');
if (fs.existsSync(packageJsonPath)) {
try {
const pkg = JSON.parse(fs.readFileSync(packageJsonPath, 'utf8'));
if (pkg.name === 'mcphub' || pkg.name === '@samanhappy/mcphub') {
if (debug) {
console.log(`DEBUG: Found package.json at ${packageJsonPath}`);
}
// Cache the result if no specific start path was requested
if (!startPath) {
cachedPackageRoot = root;
}
return root;
}
} catch (e) {
// Continue to the next potential root
if (debug) {
console.error(`DEBUG: Failed to parse package.json at ${packageJsonPath}:`, e);
}
}
}
}
if (debug) {
console.warn('DEBUG: Could not find package root directory');
}
// Cache null result as well to avoid repeated searches
if (!startPath) {
cachedPackageRoot = null;
}
return null;
};
function getParentPath(p: string, filename: string): string {
if (p.endsWith(filename)) {
p = p.slice(0, -filename.length);
@@ -40,22 +201,36 @@ export const getConfigFilePath = (filename: string, description = 'Configuration
}
const potentialPaths = [
...[
// Prioritize process.cwd() as the first location to check
path.resolve(process.cwd(), filename),
// Use path relative to the root directory
path.join(rootDir, filename),
// If installed with npx, may need to look one level up
path.join(dirname(rootDir), filename),
],
// Prioritize process.cwd() as the first location to check
path.resolve(process.cwd(), filename),
// Use path relative to the root directory
path.join(rootDir, filename),
// If installed with npx, may need to look one level up
path.join(dirname(rootDir), filename),
];
// Also check in the installed package root directory
const packageRoot = findPackageRoot();
if (packageRoot) {
potentialPaths.push(path.join(packageRoot, filename));
}
for (const filePath of potentialPaths) {
if (fs.existsSync(filePath)) {
return filePath;
}
}
// If all paths do not exist, check if we have a fallback in the package root
// If the file exists in the package root, use it as the default
if (packageRoot) {
const packageConfigPath = path.join(packageRoot, filename);
if (fs.existsSync(packageConfigPath)) {
console.log(`Using ${description} from package: ${packageConfigPath}`);
return packageConfigPath;
}
}
// If all paths do not exist, use default path
// Using the default path is acceptable because it ensures the application can proceed
// even if the configuration file is missing. This fallback is particularly useful in

View File

@@ -1,13 +1,24 @@
import fs from 'fs';
import path from 'path';
import { findPackageRoot } from './path.js';
/**
* Gets the package version from package.json
* @param searchPath Optional path to start searching from (defaults to cwd)
* @returns The version string from package.json, or 'dev' if not found
*/
export const getPackageVersion = (): string => {
export const getPackageVersion = (searchPath?: string): string => {
try {
const packageJsonPath = path.resolve(process.cwd(), 'package.json');
// Use provided path or fallback to current working directory
const startPath = searchPath || process.cwd();
const packageRoot = findPackageRoot(startPath);
if (!packageRoot) {
console.warn('Could not find package root, using default version');
return 'dev';
}
const packageJsonPath = path.join(packageRoot, 'package.json');
const packageJsonContent = fs.readFileSync(packageJsonPath, 'utf8');
const packageJson = JSON.parse(packageJsonContent);
return packageJson.version || 'dev';

View File

@@ -8,6 +8,11 @@ Object.assign(process.env, {
DATABASE_URL: 'sqlite::memory:',
});
// Mock moduleDir to avoid import.meta parsing issues in Jest
jest.mock('../src/utils/moduleDir.js', () => ({
getCurrentModuleDir: jest.fn(() => process.cwd()),
}));
// Global test utilities
declare global {
// eslint-disable-next-line @typescript-eslint/no-namespace