Merge branch 'main' into main

Pierluigi Viti
2025-09-25 10:22:49 +02:00
committed by GitHub
35 changed files with 2850 additions and 957 deletions

.github/workflows/claude.yml (new file)

@@ -0,0 +1,51 @@
name: Claude Code
on:
issue_comment:
types: [created]
pull_request_review_comment:
types: [created]
issues:
types: [opened, assigned]
pull_request_review:
types: [submitted]
jobs:
claude:
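# Run only when the triggering comment, review, or issue mentions @claude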
if: |
(github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
(github.event_name == 'pull_request_review_comment' && contains(github.event.comment.body, '@claude')) ||
(github.event_name == 'pull_request_review' && contains(github.event.review.body, '@claude')) ||
(github.event_name == 'issues' && (contains(github.event.issue.body, '@claude') || contains(github.event.issue.title, '@claude')))
runs-on: ubuntu-latest
permissions:
contents: read
pull-requests: read
issues: read
id-token: write
actions: read
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 1
- name: Run Claude Code
id: claude
uses: anthropics/claude-code-action@beta
with:
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
# Allow Claude to read CI results on PRs
additional_permissions: |
actions: read
# Trigger when assigned to an issue
assignee_trigger: "claude"
# Allow Claude to run bash
# This should be safe given the repo is already public
allowed_tools: "Bash"
custom_instructions: |
If posting a comment to GitHub, give a concise summary of the comment at the top and put all the details in a <details> block.


@@ -8,6 +8,7 @@ on:
jobs:
create-metadata:
runs-on: ubuntu-latest
if: github.repository_owner == 'modelcontextprotocol'
outputs:
hash: ${{ steps.last-release.outputs.hash }}
version: ${{ steps.create-version.outputs.version}}
@@ -104,6 +105,7 @@ jobs:
publish-pypi:
needs: [update-packages, create-metadata]
if: ${{ needs.create-metadata.outputs.pypi_packages != '[]' && needs.create-metadata.outputs.pypi_packages != '' }}
strategy:
fail-fast: false
matrix:
@@ -145,6 +147,7 @@ jobs:
publish-npm:
needs: [update-packages, create-metadata]
if: ${{ needs.create-metadata.outputs.npm_packages != '[]' && needs.create-metadata.outputs.npm_packages != '' }}
strategy:
fail-fast: false
matrix:
@@ -190,7 +193,10 @@ jobs:
create-release:
needs: [update-packages, create-metadata, publish-pypi, publish-npm]
if: needs.update-packages.outputs.changes_made == 'true'
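# always() lets this job be evaluated even if a publish job was skipped;
# it still requires changes plus at least one successful publish.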
if: |
always() &&
needs.update-packages.outputs.changes_made == 'true' &&
(needs.publish-pypi.result == 'success' || needs.publish-npm.result == 'success')
runs-on: ubuntu-latest
environment: release
permissions:

.gitignore

@@ -122,6 +122,9 @@ dist
# Stores VSCode versions used for testing VSCode extensions
.vscode-test
# Jetbrains IDEs
.idea/
# yarn v2
.yarn/cache
.yarn/unplugged

CONTRIBUTING.md

@@ -1,105 +1,34 @@
# Contributing to MCP Servers
Thank you for your interest in contributing to the Model Context Protocol (MCP) servers! This document provides guidelines and instructions for contributing.
Thanks for your interest in contributing! Here's how you can help make this repo better.
## Types of Contributions
We accept changes through [the standard GitHub flow model](https://docs.github.com/en/get-started/using-github/github-flow).
### 1. New Servers
## Server Listings
The repository contains reference implementations, as well as a list of community servers.
We generally don't accept new servers into the repository. We do accept pull requests to the [README.md](./README.md)
adding a reference to your servers.
We welcome PRs that add links to your servers in the [README.md](./README.md)!
Please keep lists in alphabetical order to minimize merge conflicts when adding new items.
## Server Implementations
- Check the [modelcontextprotocol.io](https://modelcontextprotocol.io) documentation
- Ensure your server doesn't duplicate existing functionality
- Consider whether your server would be generally useful to others
- Follow [security best practices](https://modelcontextprotocol.io/docs/concepts/transports#security-considerations) from the MCP documentation
- Create a PR adding a link to your server to the [README.md](./README.md).
We welcome:
- **Bug fixes** — Help us squash those pesky bugs.
- **Usability improvements** — Making servers easier to use for humans and agents.
- **Enhancements that demonstrate MCP protocol features** — We encourage contributions that help reference servers better illustrate underutilized aspects of the MCP protocol beyond just Tools, such as Resources, Prompts, or Roots. For example, adding Roots support to filesystem-server helps showcase this important but lesser-known feature.
### 2. Improvements to Existing Servers
Enhancements to existing servers are welcome! This includes:
We're more selective about:
- **Other new features** — Especially if they're not crucial to the server's core purpose or are highly opinionated. The existing servers are reference servers meant to inspire the community. If you need specific features, we encourage you to build enhanced versions! We think a diverse ecosystem of servers is beneficial for everyone, and would love to link to your improved server in our README.
- Bug fixes
- Performance improvements
- New features
- Security enhancements
We don't accept:
- **New server implementations** — We encourage you to publish them yourself, and link to them from the README.
### 3. Documentation
Documentation improvements are always welcome:
## Documentation
- Fixing typos or unclear instructions
- Adding examples
- Improving setup instructions
- Adding troubleshooting guides
Improvements to existing documentation are welcome - although generally we'd prefer ergonomic improvements over documenting pain points where possible!
## Getting Started
1. Fork the repository
2. Clone your fork:
```bash
git clone https://github.com/your-username/servers.git
```
3. Add the upstream remote:
```bash
git remote add upstream https://github.com/modelcontextprotocol/servers.git
```
4. Create a branch:
```bash
git checkout -b my-feature
```
## Development Guidelines
### Code Style
- Follow the existing code style in the repository
- Include appropriate type definitions
- Add comments for complex logic
### Documentation
- Include a detailed README.md in your server directory
- Document all configuration options
- Provide setup instructions
- Include usage examples
### Security
- Follow security best practices
- Implement proper input validation
- Handle errors appropriately
- Document security considerations
## Submitting Changes
1. Commit your changes:
```bash
git add .
git commit -m "Description of changes"
```
2. Push to your fork:
```bash
git push origin my-feature
```
3. Create a Pull Request through GitHub
### Pull Request Guidelines
- Thoroughly test your changes
- Fill out the pull request template completely
- Link any related issues
- Provide clear description of changes
- Include any necessary documentation updates
- Add screenshots for UI changes
- List any breaking changes
We're more selective about adding wholly new documentation, especially in ways that aren't vendor-neutral (e.g. how to run a particular server with a particular client).
## Community
- Participate in [GitHub Discussions](https://github.com/orgs/modelcontextprotocol/discussions)
- Follow the [Code of Conduct](CODE_OF_CONDUCT.md)
[Learn how the MCP community communicates](https://modelcontextprotocol.io/community/communication).
## Questions?
- Check the [documentation](https://modelcontextprotocol.io)
- Ask in GitHub Discussions
Thank you for contributing to MCP Servers!
Thank you for helping make MCP servers better for everyone!

LICENSE

@@ -1,6 +1,6 @@
MIT License
Copyright (c) 2024 Anthropic, PBC
Copyright (c) 2025 Anthropic, PBC
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

README.md (diff suppressed because it is too large)

package-lock.json (generated)

@@ -1376,6 +1376,16 @@
"@types/node": "*"
}
},
"node_modules/@types/cors": {
"version": "2.8.19",
"resolved": "https://registry.npmjs.org/@types/cors/-/cors-2.8.19.tgz",
"integrity": "sha512-mFNylyeyqN93lfe/9CSxOGREz8cpzAhH+E93xJ4xWQf62V8sQ/24reV2nyzUWM6H6Xji+GGHpkbLe7pVoUEskg==",
"dev": true,
"license": "MIT",
"dependencies": {
"@types/node": "*"
}
},
"node_modules/@types/diff": {
"version": "5.2.3",
"resolved": "https://registry.npmjs.org/@types/diff/-/diff-5.2.3.tgz",
@@ -1849,10 +1859,11 @@
}
},
"node_modules/brace-expansion": {
"version": "1.1.11",
"resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.11.tgz",
"integrity": "sha512-iCuPHDFgrHX7H2vEI/5xpz07zSHB00TpugqhmYtVmMO6518mCuRMoOYFldEBl0g187ufozdaHgWKcYFb61qGiA==",
"version": "1.1.12",
"resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.12.tgz",
"integrity": "sha512-9T9UjW3r0UW5c1Q7GTwllptXwhvYmEzFhzMfZ9H7FQWt+uZePjZPjBP/W1ZEyZ1twGWom5/56TF4lPcqjnDHcg==",
"dev": true,
"license": "MIT",
"dependencies": {
"balanced-match": "^1.0.0",
"concat-map": "0.0.1"
@@ -2596,9 +2607,9 @@
}
},
"node_modules/filelist/node_modules/brace-expansion": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-2.0.1.tgz",
"integrity": "sha512-XnAIvQ8eM+kC6aULx6wuQiwVsnzsi9d3WxzV3FpWTGA19F621kwdbsAcFKXgKUHZWsy+mY6iL1sHTxWEFCytDA==",
"version": "2.0.2",
"resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-2.0.2.tgz",
"integrity": "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ==",
"dev": true,
"license": "MIT",
"dependencies": {
@@ -5817,7 +5828,8 @@
"version": "0.6.2",
"license": "MIT",
"dependencies": {
"@modelcontextprotocol/sdk": "^1.12.0",
"@modelcontextprotocol/sdk": "^1.18.0",
"cors": "^2.8.5",
"express": "^4.21.1",
"zod": "^3.23.8",
"zod-to-json-schema": "^3.23.5"
@@ -5826,15 +5838,16 @@
"mcp-server-everything": "dist/index.js"
},
"devDependencies": {
"@types/cors": "^2.8.19",
"@types/express": "^5.0.0",
"shx": "^0.3.4",
"typescript": "^5.6.2"
}
},
"src/everything/node_modules/@modelcontextprotocol/sdk": {
"version": "1.12.3",
"resolved": "https://registry.npmjs.org/@modelcontextprotocol/sdk/-/sdk-1.12.3.tgz",
"integrity": "sha512-DyVYSOafBvk3/j1Oka4z5BWT8o4AFmoNyZY9pALOm7Lh3GZglR71Co4r4dEUoqDWdDazIZQHBe7J2Nwkg6gHgQ==",
"version": "1.18.0",
"resolved": "https://registry.npmjs.org/@modelcontextprotocol/sdk/-/sdk-1.18.0.tgz",
"integrity": "sha512-JvKyB6YwS3quM+88JPR0axeRgvdDu3Pv6mdZUy+w4qVkCzGgumb9bXG/TmtDRQv+671yaofVfXSQmFLlWU5qPQ==",
"license": "MIT",
"dependencies": {
"ajv": "^6.12.6",
@@ -5842,6 +5855,7 @@
"cors": "^2.8.5",
"cross-spawn": "^7.0.5",
"eventsource": "^3.0.2",
"eventsource-parser": "^3.0.0",
"express": "^5.0.1",
"express-rate-limit": "^7.5.0",
"pkce-challenge": "^5.0.0",
@@ -6156,10 +6170,10 @@
},
"src/filesystem": {
"name": "@modelcontextprotocol/server-filesystem",
"version": "0.6.2",
"version": "0.6.3",
"license": "MIT",
"dependencies": {
"@modelcontextprotocol/sdk": "^1.12.3",
"@modelcontextprotocol/sdk": "^1.17.0",
"diff": "^5.1.0",
"glob": "^10.3.10",
"minimatch": "^10.0.1",
@@ -6182,9 +6196,9 @@
}
},
"src/filesystem/node_modules/@modelcontextprotocol/sdk": {
"version": "1.12.3",
"resolved": "https://registry.npmjs.org/@modelcontextprotocol/sdk/-/sdk-1.12.3.tgz",
"integrity": "sha512-DyVYSOafBvk3/j1Oka4z5BWT8o4AFmoNyZY9pALOm7Lh3GZglR71Co4r4dEUoqDWdDazIZQHBe7J2Nwkg6gHgQ==",
"version": "1.17.0",
"resolved": "https://registry.npmjs.org/@modelcontextprotocol/sdk/-/sdk-1.17.0.tgz",
"integrity": "sha512-qFfbWFA7r1Sd8D697L7GkTd36yqDuTkvz0KfOGkgXR8EUhQn3/EDNIR/qUdQNMT8IjmasBvHWuXeisxtXTQT2g==",
"license": "MIT",
"dependencies": {
"ajv": "^6.12.6",
@@ -6192,6 +6206,7 @@
"cors": "^2.8.5",
"cross-spawn": "^7.0.5",
"eventsource": "^3.0.2",
"eventsource-parser": "^3.0.0",
"express": "^5.0.1",
"express-rate-limit": "^7.5.0",
"pkce-challenge": "^5.0.0",
@@ -6237,9 +6252,9 @@
}
},
"src/filesystem/node_modules/brace-expansion": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-2.0.1.tgz",
"integrity": "sha512-XnAIvQ8eM+kC6aULx6wuQiwVsnzsi9d3WxzV3FpWTGA19F621kwdbsAcFKXgKUHZWsy+mY6iL1sHTxWEFCytDA==",
"version": "2.0.2",
"resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-2.0.2.tgz",
"integrity": "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ==",
"license": "MIT",
"dependencies": {
"balanced-match": "^1.0.0"

src/everything/README.md

@@ -27,24 +27,24 @@ This MCP server attempts to exercise all the features of the MCP protocol. It is
- Returns: Completion message with duration and steps
- Sends progress notifications during execution
4. `sampleLLM`
4. `printEnv`
- Prints all environment variables
- Useful for debugging MCP server configuration
- No inputs required
- Returns: JSON string of all environment variables
5. `sampleLLM`
- Demonstrates LLM sampling capability using MCP sampling feature
- Inputs:
- `prompt` (string): The prompt to send to the LLM
- `maxTokens` (number, default: 100): Maximum tokens to generate
- Returns: Generated LLM response
5. `getTinyImage`
6. `getTinyImage`
- Returns a small test image
- No inputs required
- Returns: Base64 encoded PNG image data
6. `printEnv`
- Prints all environment variables
- Useful for debugging MCP server configuration
- No inputs required
- Returns: JSON string of all environment variables
7. `annotatedMessage`
- Demonstrates how annotations can be used to provide metadata about content
- Inputs:
@@ -80,6 +80,22 @@ This MCP server attempts to exercise all the features of the MCP protocol. It is
- `pets` (enum): Favorite pet
- Returns: Confirmation of the elicitation demo with selection summary.
10. `structuredContent`
- Demonstrates a tool returning structured content using the example in the specification
- Provides an output schema so clients can exercise the specification's SHOULD advisory to validate the result against the schema
- Inputs:
- `location` (string): A location or ZIP code, mock data is returned regardless of value
- Returns: a response with
- a `structuredContent` field conforming to the output schema
- a backward-compatible text content field (a SHOULD advisory in the specification); see the sketch after this list
11. `listRoots`
- Lists the current MCP roots provided by the client
- Demonstrates the roots protocol capability even though this server doesn't access files
- No inputs required
- Returns: List of current roots with their URIs and names, or a message if no roots are set
- Shows how servers can interact with the MCP roots protocol
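As a sketch of what the `structuredContent` tool returns (mirroring the mock weather payload in `everything.ts` later in this diff), the reply carries the same data twice: once structured, once as backward-compatible text:

```typescript
// Sketch of the structuredContent tool result. The server returns the
// same mock values regardless of the `location` input.
const weather = {
  temperature: 22.5, // celsius, per the output schema
  conditions: "Partly cloudy",
  humidity: 65, // percentage
};

const result = {
  // Backward-compatible text mirror (a SHOULD advisory in the spec)
  content: [{ type: "text", text: JSON.stringify(weather) }],
  // Clients SHOULD validate this against the declared output schema
  structuredContent: weather,
};
```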
### Resources
The server provides 100 test resources in two formats:
@@ -108,7 +124,7 @@ Resource features:
2. `complex_prompt`
- Advanced prompt demonstrating argument handling
- Required arguments:
- `temperature` (number): Temperature setting
- `temperature` (string): Temperature setting
- Optional arguments:
- `style` (string): Output style preference
- Returns: Multi-turn conversation with images
@@ -120,6 +136,18 @@ Resource features:
- Returns: Multi-turn conversation with an embedded resource reference
- Shows how to include resources directly in prompt messages
### Roots
The server demonstrates the MCP roots protocol capability:
- Declares `roots: { listChanged: true }` capability to indicate support for roots
- Handles `roots/list_changed` notifications from clients
- Requests initial roots during server initialization
- Provides a `listRoots` tool to display current roots
- Logs roots-related events for demonstration purposes
Note: This server doesn't actually access files, but demonstrates how servers can interact with the roots protocol for clients that need to understand which directories are available for file operations.
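A condensed sketch of that wiring, following the `RootsListChangedNotificationSchema` handler in `everything.ts` later in this diff (error handling and logging omitted):

```typescript
// Condensed from everything.ts: re-fetch the roots list whenever the
// client reports a change, and cache it for the listRoots tool.
server.setNotificationHandler(RootsListChangedNotificationSchema, async () => {
  const response = await server.listRoots(); // ask the client for its roots
  if (response && "roots" in response) {
    currentRoots = response.roots;
  }
});
```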
### Logging
The server sends random-leveled log messages every 15 seconds, e.g.:
@@ -160,22 +188,24 @@ For quick installation, use one of the one-click install buttons below...
[![Install with Docker in VS Code](https://img.shields.io/badge/VS_Code-Docker-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://insiders.vscode.dev/redirect/mcp/install?name=everything&config=%7B%22command%22%3A%22docker%22%2C%22args%22%3A%5B%22run%22%2C%22-i%22%2C%22--rm%22%2C%22mcp%2Feverything%22%5D%7D) [![Install with Docker in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Docker-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://insiders.vscode.dev/redirect/mcp/install?name=everything&config=%7B%22command%22%3A%22docker%22%2C%22args%22%3A%5B%22run%22%2C%22-i%22%2C%22--rm%22%2C%22mcp%2Feverything%22%5D%7D&quality=insiders)
For manual installation, add the following JSON block to your User Settings (JSON) file in VS Code. You can do this by pressing `Ctrl + Shift + P` and typing `Preferences: Open User Settings (JSON)`.
For manual installation, you can configure the MCP server using one of these methods:
Optionally, you can add it to a file called `.vscode/mcp.json` in your workspace. This will allow you to share the configuration with others.
**Method 1: User Configuration (Recommended)**
Add the configuration to your user-level MCP configuration file. Open the Command Palette (`Ctrl + Shift + P`) and run `MCP: Open User Configuration`. This will open your user `mcp.json` file where you can add the server configuration.
> Note that the `mcp` key is not needed in the `.vscode/mcp.json` file.
**Method 2: Workspace Configuration**
Alternatively, you can add the configuration to a file called `.vscode/mcp.json` in your workspace. This will allow you to share the configuration with others.
> For more details about MCP configuration in VS Code, see the [official VS Code MCP documentation](https://code.visualstudio.com/docs/copilot/mcp).
#### NPX
```json
{
"mcp": {
"servers": {
"everything": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-everything"]
}
"servers": {
"everything": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-everything"]
}
}
}

src/everything/everything.ts

@@ -1,6 +1,7 @@
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import {
CallToolRequestSchema,
ClientCapabilities,
CompleteRequestSchema,
CreateMessageRequest,
CreateMessageResultSchema,
@@ -12,11 +13,12 @@ import {
LoggingLevel,
ReadResourceRequestSchema,
Resource,
SetLevelRequestSchema,
RootsListChangedNotificationSchema,
SubscribeRequestSchema,
Tool,
ToolSchema,
UnsubscribeRequestSchema,
type Root
} from "@modelcontextprotocol/sdk/types.js";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
@@ -31,6 +33,9 @@ const instructions = readFileSync(join(__dirname, "instructions.md"), "utf-8");
const ToolInputSchema = ToolSchema.shape.inputSchema;
type ToolInput = z.infer<typeof ToolInputSchema>;
const ToolOutputSchema = ToolSchema.shape.outputSchema;
type ToolOutput = z.infer<typeof ToolOutputSchema>;
/* Input schemas for tools implemented in this server */
const EchoSchema = z.object({
message: z.string().describe("Message to echo"),
@@ -46,7 +51,10 @@ const LongRunningOperationSchema = z.object({
.number()
.default(10)
.describe("Duration of the operation in seconds"),
steps: z.number().default(5).describe("Number of steps in the operation"),
steps: z
.number()
.default(5)
.describe("Number of steps in the operation"),
});
const PrintEnvSchema = z.object({});
@@ -59,13 +67,6 @@ const SampleLLMSchema = z.object({
.describe("Maximum number of tokens to generate"),
});
// Example completion values
const EXAMPLE_COMPLETIONS = {
style: ["casual", "formal", "technical", "friendly"],
temperature: ["0", "0.5", "0.7", "1.0"],
resourceId: ["1", "2", "3", "4", "5"],
};
const GetTinyImageSchema = z.object({});
const AnnotatedMessageSchema = z.object({
@@ -97,6 +98,30 @@ const GetResourceLinksSchema = z.object({
.describe("Number of resource links to return (1-10)"),
});
const ListRootsSchema = z.object({});
const StructuredContentSchema = {
input: z.object({
location: z
.string()
.trim()
.min(1)
.describe("City name or zip code"),
}),
output: z.object({
temperature: z
.number()
.describe("Temperature in celsius"),
conditions: z
.string()
.describe("Weather conditions description"),
humidity: z
.number()
.describe("Humidity percentage"),
})
};
enum ToolName {
ECHO = "echo",
ADD = "add",
@@ -108,6 +133,8 @@ enum ToolName {
GET_RESOURCE_REFERENCE = "getResourceReference",
ELICITATION = "startElicitation",
GET_RESOURCE_LINKS = "getResourceLinks",
STRUCTURED_CONTENT = "structuredContent",
LIST_ROOTS = "listRoots"
}
enum PromptName {
@@ -116,10 +143,18 @@ enum PromptName {
RESOURCE = "resource_prompt",
}
// Example completion values
const EXAMPLE_COMPLETIONS = {
style: ["casual", "formal", "technical", "friendly"],
temperature: ["0", "0.5", "0.7", "1.0"],
resourceId: ["1", "2", "3", "4", "5"],
};
export const createServer = () => {
const server = new Server(
{
name: "example-servers/everything",
title: "Everything Example Server",
version: "1.0.0",
},
{
@@ -128,8 +163,7 @@ export const createServer = () => {
resources: { subscribe: true },
tools: {},
logging: {},
completions: {},
elicitation: {},
completions: {}
},
instructions
}
@@ -139,59 +173,49 @@ export const createServer = () => {
let subsUpdateInterval: NodeJS.Timeout | undefined;
let stdErrUpdateInterval: NodeJS.Timeout | undefined;
// Set up update interval for subscribed resources
subsUpdateInterval = setInterval(() => {
for (const uri of subscriptions) {
server.notification({
method: "notifications/resources/updated",
params: { uri },
});
}
}, 10000);
let logLevel: LoggingLevel = "debug";
let logsUpdateInterval: NodeJS.Timeout | undefined;
const messages = [
{ level: "debug", data: "Debug-level message" },
{ level: "info", data: "Info-level message" },
{ level: "notice", data: "Notice-level message" },
{ level: "warning", data: "Warning-level message" },
{ level: "error", data: "Error-level message" },
{ level: "critical", data: "Critical-level message" },
{ level: "alert", data: "Alert level-message" },
{ level: "emergency", data: "Emergency-level message" },
];
// Store client capabilities
let clientCapabilities: ClientCapabilities | undefined;
const isMessageIgnored = (level: LoggingLevel): boolean => {
const currentLevel = messages.findIndex((msg) => logLevel === msg.level);
const messageLevel = messages.findIndex((msg) => level === msg.level);
return messageLevel < currentLevel;
// Roots state management
let currentRoots: Root[] = [];
let clientSupportsRoots = false;
let sessionId: string | undefined;
// Function to start notification intervals when a client connects
const startNotificationIntervals = (sid?: string) => {
sessionId = sid;
if (!subsUpdateInterval) {
subsUpdateInterval = setInterval(() => {
for (const uri of subscriptions) {
server.notification({
method: "notifications/resources/updated",
params: { uri },
});
}
}, 10000);
}
const maybeAppendSessionId = sessionId ? ` - SessionId ${sessionId}` : "";
const messages: { level: LoggingLevel; data: string }[] = [
{ level: "debug", data: `Debug-level message${maybeAppendSessionId}` },
{ level: "info", data: `Info-level message${maybeAppendSessionId}` },
{ level: "notice", data: `Notice-level message${maybeAppendSessionId}` },
{ level: "warning", data: `Warning-level message${maybeAppendSessionId}` },
{ level: "error", data: `Error-level message${maybeAppendSessionId}` },
{ level: "critical", data: `Critical-level message${maybeAppendSessionId}` },
{ level: "alert", data: `Alert level-message${maybeAppendSessionId}` },
{ level: "emergency", data: `Emergency-level message${maybeAppendSessionId}` },
];
if (!logsUpdateInterval) {
console.error("Starting logs update interval");
logsUpdateInterval = setInterval(async () => {
await server.sendLoggingMessage(messages[Math.floor(Math.random() * messages.length)], sessionId);
}, 15000);
}
};
// Set up update interval for random log messages
logsUpdateInterval = setInterval(() => {
let message = {
method: "notifications/message",
params: messages[Math.floor(Math.random() * messages.length)],
};
if (!isMessageIgnored(message.params.level as LoggingLevel))
server.notification(message);
}, 20000);
// Set up update interval for stderr messages
stdErrUpdateInterval = setInterval(() => {
const shortTimestamp = new Date().toLocaleTimeString([], {
hour: "2-digit",
minute: "2-digit",
second: "2-digit"
});
server.notification({
method: "notifications/stderr",
params: { content: `${shortTimestamp}: A stderr message` },
});
}, 30000);
// Helper method to request sampling from client
const requestSampling = async (
context: string,
@@ -454,18 +478,18 @@ export const createServer = () => {
description: "Adds two numbers",
inputSchema: zodToJsonSchema(AddSchema) as ToolInput,
},
{
name: ToolName.PRINT_ENV,
description:
"Prints all environment variables, helpful for debugging MCP server configuration",
inputSchema: zodToJsonSchema(PrintEnvSchema) as ToolInput,
},
{
name: ToolName.LONG_RUNNING_OPERATION,
description:
"Demonstrates a long running operation with progress updates",
inputSchema: zodToJsonSchema(LongRunningOperationSchema) as ToolInput,
},
{
name: ToolName.PRINT_ENV,
description:
"Prints all environment variables, helpful for debugging MCP server configuration",
inputSchema: zodToJsonSchema(PrintEnvSchema) as ToolInput,
},
{
name: ToolName.SAMPLE_LLM,
description: "Samples from an LLM using MCP's sampling feature",
@@ -488,23 +512,36 @@ export const createServer = () => {
"Returns a resource reference that can be used by MCP clients",
inputSchema: zodToJsonSchema(GetResourceReferenceSchema) as ToolInput,
},
{
name: ToolName.ELICITATION,
description: "Demonstrates the Elicitation feature by asking the user to provide information about their favorite color, number, and pets.",
inputSchema: zodToJsonSchema(ElicitationSchema) as ToolInput,
},
{
name: ToolName.GET_RESOURCE_LINKS,
description:
"Returns multiple resource links that reference different types of resources",
inputSchema: zodToJsonSchema(GetResourceLinksSchema) as ToolInput,
},
{
name: ToolName.STRUCTURED_CONTENT,
description:
"Returns structured content along with an output schema for client data validation",
inputSchema: zodToJsonSchema(StructuredContentSchema.input) as ToolInput,
outputSchema: zodToJsonSchema(StructuredContentSchema.output) as ToolOutput,
},
];
if (clientCapabilities!.roots) tools.push({
name: ToolName.LIST_ROOTS,
description:
"Lists the current MCP roots provided by the client. Demonstrates the roots protocol capability even though this server doesn't access files.",
inputSchema: zodToJsonSchema(ListRootsSchema) as ToolInput,
});
if (clientCapabilities!.elicitation) tools.push({
name: ToolName.ELICITATION,
description: "Demonstrates the Elicitation feature by asking the user to provide information about their favorite color, number, and pets.",
inputSchema: zodToJsonSchema(ElicitationSchema) as ToolInput,
});
return { tools };
});
server.setRequestHandler(CallToolRequestSchema, async (request) => {
server.setRequestHandler(CallToolRequestSchema, async (request, extra) => {
const { name, arguments: args } = request.params;
if (name === ToolName.ECHO) {
@@ -546,7 +583,7 @@ export const createServer = () => {
total: steps,
progressToken,
},
});
}, { relatedRequestId: extra.requestId });
}
}
@@ -608,35 +645,6 @@ export const createServer = () => {
};
}
if (name === ToolName.GET_RESOURCE_REFERENCE) {
const validatedArgs = GetResourceReferenceSchema.parse(args);
const resourceId = validatedArgs.resourceId;
const resourceIndex = resourceId - 1;
if (resourceIndex < 0 || resourceIndex >= ALL_RESOURCES.length) {
throw new Error(`Resource with ID ${resourceId} does not exist`);
}
const resource = ALL_RESOURCES[resourceIndex];
return {
content: [
{
type: "text",
text: `Returning resource reference for Resource ${resourceId}:`,
},
{
type: "resource",
resource: resource,
},
{
type: "text",
text: `You can access this resource using the URI: ${resource.uri}`,
},
],
};
}
if (name === ToolName.ANNOTATED_MESSAGE) {
const { messageType, includeImage } = AnnotatedMessageSchema.parse(args);
@@ -688,6 +696,35 @@ export const createServer = () => {
return { content };
}
if (name === ToolName.GET_RESOURCE_REFERENCE) {
const validatedArgs = GetResourceReferenceSchema.parse(args);
const resourceId = validatedArgs.resourceId;
const resourceIndex = resourceId - 1;
if (resourceIndex < 0 || resourceIndex >= ALL_RESOURCES.length) {
throw new Error(`Resource with ID ${resourceId} does not exist`);
}
const resource = ALL_RESOURCES[resourceIndex];
return {
content: [
{
type: "text",
text: `Returning resource reference for Resource ${resourceId}:`,
},
{
type: "resource",
resource: resource,
},
{
type: "text",
text: `You can access this resource using the URI: ${resource.uri}`,
},
],
};
}
if (name === ToolName.ELICITATION) {
ElicitationSchema.parse(args);
@@ -698,10 +735,10 @@ export const createServer = () => {
properties: {
color: { type: 'string', description: 'Favorite color' },
number: { type: 'integer', description: 'Favorite number', minimum: 1, maximum: 100 },
pets: {
type: 'string',
enum: ['cats', 'dogs', 'birds', 'fish', 'reptiles'],
description: 'Favorite pets'
pets: {
type: 'string',
enum: ['cats', 'dogs', 'birds', 'fish', 'reptiles'],
description: 'Favorite pets'
},
}
}
@@ -709,13 +746,13 @@ export const createServer = () => {
// Handle different response actions
const content = [];
if (elicitationResult.action === 'accept' && elicitationResult.content) {
content.push({
type: "text",
text: `✅ User provided their favorite things!`,
});
// Only access elicitationResult.content when action is accept
const { color, number, pets } = elicitationResult.content;
content.push({
@@ -733,7 +770,7 @@ export const createServer = () => {
text: `⚠️ User cancelled the elicitation dialog.`,
});
}
// Include raw result for debugging
content.push({
type: "text",
@@ -742,7 +779,7 @@ export const createServer = () => {
return { content };
}
if (name === ToolName.GET_RESOURCE_LINKS) {
const { count } = GetResourceLinksSchema.parse(args);
const content = [];
@@ -761,11 +798,10 @@ export const createServer = () => {
type: "resource_link",
uri: resource.uri,
name: resource.name,
description: `Resource ${i + 1}: ${
resource.mimeType === "text/plain"
? "plaintext resource"
: "binary blob resource"
}`,
description: `Resource ${i + 1}: ${resource.mimeType === "text/plain"
? "plaintext resource"
: "binary blob resource"
}`,
mimeType: resource.mimeType,
});
}
@@ -773,6 +809,73 @@ export const createServer = () => {
return { content };
}
if (name === ToolName.STRUCTURED_CONTENT) {
// The same response is returned for every input.
const validatedArgs = StructuredContentSchema.input.parse(args);
const weather = {
temperature: 22.5,
conditions: "Partly cloudy",
humidity: 65
};
const backwardCompatibleContent = {
type: "text",
text: JSON.stringify(weather)
};
return {
content: [backwardCompatibleContent],
structuredContent: weather
};
}
if (name === ToolName.LIST_ROOTS) {
ListRootsSchema.parse(args);
if (!clientSupportsRoots) {
return {
content: [
{
type: "text",
text: "The MCP client does not support the roots protocol.\n\n" +
"This means the server cannot access information about the client's workspace directories or file system roots."
}
]
};
}
if (currentRoots.length === 0) {
return {
content: [
{
type: "text",
text: "The client supports roots but no roots are currently configured.\n\n" +
"This could mean:\n" +
"1. The client hasn't provided any roots yet\n" +
"2. The client provided an empty roots list\n" +
"3. The roots configuration is still being loaded"
}
]
};
}
const rootsList = currentRoots.map((root, index) => {
return `${index + 1}. ${root.name || 'Unnamed Root'}\n URI: ${root.uri}`;
}).join('\n\n');
return {
content: [
{
type: "text",
text: `Current MCP Roots (${currentRoots.length} total):\n\n${rootsList}\n\n` +
"Note: This server demonstrates the roots protocol capability but doesn't actually access files. " +
"The roots are provided by the MCP client and can be used by servers that need file system access."
}
]
};
}
throw new Error(`Unknown tool: ${name}`);
});
@@ -805,30 +908,76 @@ export const createServer = () => {
throw new Error(`Unknown reference type`);
});
server.setRequestHandler(SetLevelRequestSchema, async (request) => {
const { level } = request.params;
logLevel = level;
// Roots protocol handlers
server.setNotificationHandler(RootsListChangedNotificationSchema, async () => {
try {
// Request the updated roots list from the client
const response = await server.listRoots();
if (response && 'roots' in response) {
currentRoots = response.roots;
// Demonstrate different log levels
await server.notification({
method: "notifications/message",
params: {
level: "debug",
logger: "test-server",
data: `Logging level set to: ${logLevel}`,
},
});
return {};
// Log the roots update for demonstration
await server.sendLoggingMessage({
level: "info",
logger: "everything-server",
data: `Roots updated: ${currentRoots.length} root(s) received from client`,
}, sessionId);
}
} catch (error) {
await server.sendLoggingMessage({
level: "error",
logger: "everything-server",
data: `Failed to request roots from client: ${error instanceof Error ? error.message : String(error)}`,
}, sessionId);
}
});
// Handle post-initialization setup for roots
server.oninitialized = async () => {
clientCapabilities = server.getClientCapabilities();
if (clientCapabilities?.roots) {
clientSupportsRoots = true;
try {
const response = await server.listRoots();
if (response && 'roots' in response) {
currentRoots = response.roots;
await server.sendLoggingMessage({
level: "info",
logger: "everything-server",
data: `Initial roots received: ${currentRoots.length} root(s) from client`,
}, sessionId);
} else {
await server.sendLoggingMessage({
level: "warning",
logger: "everything-server",
data: "Client returned no roots set",
}, sessionId);
}
} catch (error) {
await server.sendLoggingMessage({
level: "error",
logger: "everything-server",
data: `Failed to request initial roots from client: ${error instanceof Error ? error.message : String(error)}`,
}, sessionId);
}
} else {
await server.sendLoggingMessage({
level: "info",
logger: "everything-server",
data: "Client does not support MCP roots protocol",
}, sessionId);
}
};
const cleanup = async () => {
if (subsUpdateInterval) clearInterval(subsUpdateInterval);
if (logsUpdateInterval) clearInterval(logsUpdateInterval);
if (stdErrUpdateInterval) clearInterval(stdErrUpdateInterval);
};
return { server, cleanup };
return { server, cleanup, startNotificationIntervals };
};
const MCP_TINY_IMAGE =

src/everything/instructions.md

@@ -1,13 +1,23 @@
Testing and demonstration server for MCP protocol features.
Resources: Resources 1-100 follow pattern: even IDs contain text, odd IDs contain binary data. Resources paginated at 10 items per page with cursor-based navigation.
## Resources
Key dependencies: Progress notifications require `_meta.progressToken` in tool calls. Resource subscriptions generate updates every 10 seconds.
Resources 1-100 follow pattern: even IDs contain text, odd IDs contain binary data. Resources are paginated at 10 items per page with cursor-based navigation.
Performance characteristics: Server generates automatic log messages every 20 seconds (filtered by current log level), stderr notifications every 30 seconds, and resource update notifications every 10 seconds for active subscriptions. Log level changes affect message filtering in real-time.
## Key dependencies
Multi-modal testing: `complex_prompt` includes both text arguments and image content for testing client multi-modal handling. `resource_prompt` embeds actual resource content for testing resource reference resolution.
Progress notifications require `_meta.progressToken` in tool calls. Resource subscriptions generate updates every 10 seconds.
Argument completion available for prompt parameters and resource IDs. Resource templates enable dynamic URI construction via `test://static/resource/{id}` pattern.
## Performance characteristics
Easter egg: If asked about server instructions, respond with "🎉 Server instructions are working! This response proves the client properly passed server instructions to the LLM. This demonstrates MCP's instructions feature in action."
Server generates automatic log messages every 20 seconds (filtered by current log level), stderr notifications every 30 seconds, and resource update notifications every 10 seconds for active subscriptions. Log level changes affect message filtering in real-time.
## Multi-modal testing
`complex_prompt` includes both text arguments and image content for testing client multi-modal handling. `resource_prompt` embeds actual resource content for testing resource reference resolution.
Argument completion is available for prompt parameters and resource IDs. Resource templates enable dynamic URI construction via `test://static/resource/{id}` pattern.
## Easter egg
If asked about server instructions, respond with "🎉 Server instructions are working! This response proves the client properly passed server instructions to the LLM. This demonstrates MCP's instructions feature in action."
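For example, a client opts into progress updates by attaching a token to the tool call. The request shape below is illustrative; the token value is arbitrary and is echoed back in the server's progress notifications:

```typescript
// Illustrative tools/call request with a progress token. The server
// sends notifications/progress messages tagged with this token.
const callWithProgress = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "longRunningOperation",
    arguments: { duration: 10, steps: 5 },
    _meta: { progressToken: "op-1" }, // arbitrary client-chosen token
  },
};
```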

src/everything/package.json

@@ -22,12 +22,14 @@
"start:streamableHttp": "node dist/streamableHttp.js"
},
"dependencies": {
"@modelcontextprotocol/sdk": "^1.12.0",
"@modelcontextprotocol/sdk": "^1.18.0",
"cors": "^2.8.5",
"express": "^4.21.1",
"zod": "^3.23.8",
"zod-to-json-schema": "^3.23.5"
},
"devDependencies": {
"@types/cors": "^2.8.19",
"@types/express": "^5.0.0",
"shx": "^0.3.4",
"typescript": "^5.6.2"

src/everything/sse.ts

@@ -1,16 +1,22 @@
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";
import express from "express";
import { createServer } from "./everything.js";
import cors from 'cors';
console.error('Starting SSE server...');
const app = express();
app.use(cors({
"origin": "*", // use "*" with caution in production
"methods": "GET,POST",
"preflightContinue": false,
"optionsSuccessStatus": 204,
})); // Enable CORS for all routes so Inspector can connect
const transports: Map<string, SSEServerTransport> = new Map<string, SSEServerTransport>();
app.get("/sse", async (req, res) => {
let transport: SSEServerTransport;
const { server, cleanup } = createServer();
const { server, cleanup, startNotificationIntervals } = createServer();
if (req?.query?.sessionId) {
const sessionId = (req?.query?.sessionId as string);
@@ -25,6 +31,9 @@ app.get("/sse", async (req, res) => {
await server.connect(transport);
console.error("Client Connected: ", transport.sessionId);
// Start notification intervals after client connects
startNotificationIntervals(transport.sessionId);
// Handle close of connection
server.onclose = async () => {
console.error("Client Disconnected: ", transport.sessionId);

src/everything/index.ts

@@ -6,17 +6,18 @@ import { createServer } from "./everything.js";
console.error('Starting default (STDIO) server...');
async function main() {
const transport = new StdioServerTransport();
const {server, cleanup} = createServer();
const transport = new StdioServerTransport();
const {server, cleanup, startNotificationIntervals} = createServer();
await server.connect(transport);
await server.connect(transport);
startNotificationIntervals();
// Cleanup on exit
process.on("SIGINT", async () => {
await cleanup();
await server.close();
process.exit(0);
});
// Cleanup on exit
process.on("SIGINT", async () => {
await cleanup();
await server.close();
process.exit(0);
});
}
main().catch((error) => {

src/everything/streamableHttp.ts

@@ -3,10 +3,22 @@ import { InMemoryEventStore } from '@modelcontextprotocol/sdk/examples/shared/in
import express, { Request, Response } from "express";
import { createServer } from "./everything.js";
import { randomUUID } from 'node:crypto';
import cors from 'cors';
console.error('Starting Streamable HTTP server...');
const app = express();
app.use(cors({
"origin": "*", // use "*" with caution in production
"methods": "GET,POST,DELETE",
"preflightContinue": false,
"optionsSuccessStatus": 204,
"exposedHeaders": [
'mcp-session-id',
'last-event-id',
'mcp-protocol-version'
]
})); // Enable CORS for all routes so Inspector can connect
const transports: Map<string, StreamableHTTPServerTransport> = new Map<string, StreamableHTTPServerTransport>();
@@ -15,6 +27,7 @@ app.post('/mcp', async (req: Request, res: Response) => {
try {
// Check for existing session ID
const sessionId = req.headers['mcp-session-id'] as string | undefined;
let transport: StreamableHTTPServerTransport;
if (sessionId && transports.has(sessionId)) {
@@ -22,7 +35,7 @@ app.post('/mcp', async (req: Request, res: Response) => {
transport = transports.get(sessionId)!;
} else if (!sessionId) {
const { server, cleanup } = createServer();
const { server, cleanup, startNotificationIntervals } = createServer();
// New initialization request
const eventStore = new InMemoryEventStore();
@@ -53,7 +66,11 @@ app.post('/mcp', async (req: Request, res: Response) => {
await server.connect(transport);
await transport.handleRequest(req, res);
return; // Already handled
// Wait until initialization is complete so the transport has a sessionId
startNotificationIntervals(transport.sessionId);
return; // Already handled
} else {
// Invalid request - no session ID or not initialization request
res.status(400).json({

src/filesystem/README.md

@@ -9,11 +9,11 @@ Node.js server implementing Model Context Protocol (MCP) for filesystem operatio
- Move files/directories
- Search files
- Get file metadata
- Dynamic directory access control via [Roots](https://modelcontextprotocol.io/docs/concepts/roots)
- Dynamic directory access control via [Roots](https://modelcontextprotocol.io/docs/learn/client-concepts#roots)
## Directory Access Control
The server uses a flexible directory access control system. Directories can be specified via command-line arguments or dynamically via [Roots](https://modelcontextprotocol.io/docs/concepts/roots).
The server uses a flexible directory access control system. Directories can be specified via command-line arguments or dynamically via [Roots](https://modelcontextprotocol.io/docs/learn/client-concepts#roots).
### Method 1: Command-line Arguments
Specify Allowed directories when starting the server:
@@ -22,7 +22,7 @@ mcp-server-filesystem /path/to/dir1 /path/to/dir2
```
### Method 2: MCP Roots (Recommended)
MCP clients that support [Roots](https://modelcontextprotocol.io/docs/concepts/roots) can dynamically update the Allowed directories.
MCP clients that support [Roots](https://modelcontextprotocol.io/docs/learn/client-concepts#roots) can dynamically update the Allowed directories.
Roots sent by the client to the server completely replace any server-side Allowed directories when provided.
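A minimal sketch of that replacement semantics, reusing the `setAllowedDirectories` helper that the server's `lib` module exports (see the tests later in this diff); the handler itself is illustrative:

```typescript
import { fileURLToPath } from "node:url";
import { setAllowedDirectories } from "./lib.js";

// Illustrative handler: roots received from the client fully replace
// the Allowed directories; they are not merged with CLI arguments.
function onRootsUpdated(roots: { uri: string; name?: string }[]) {
  const dirs = roots
    .filter((root) => root.uri.startsWith("file://"))
    .map((root) => fileURLToPath(root.uri));
  setAllowedDirectories(dirs);
}
```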
@@ -64,16 +64,22 @@ The server's directory access control follows this flow:
## API
### Resources
- `file://system`: File system operations interface
### Tools
- **read_file**
- Read complete contents of a file
- Input: `path` (string)
- Reads complete file contents with UTF-8 encoding
- **read_text_file**
- Read complete contents of a file as text
- Inputs:
- `path` (string)
- `head` (number, optional): First N lines
- `tail` (number, optional): Last N lines
- Always treats the file as UTF-8 text regardless of extension
- Cannot specify both `head` and `tail` simultaneously
- **read_media_file**
- Read an image or audio file
- Inputs:
- `path` (string)
- Streams the file and returns base64 data with the corresponding MIME type
- **read_multiple_files**
- Read multiple files simultaneously
@@ -114,6 +120,14 @@ The server's directory access control follows this flow:
- List directory contents with [FILE] or [DIR] prefixes
- Input: `path` (string)
- **list_directory_with_sizes**
- List directory contents with [FILE] or [DIR] prefixes, including file sizes
- Inputs:
- `path` (string): Directory path to list
- `sortBy` (string, optional): Sort entries by "name" or "size" (default: "name")
- Returns detailed listing with file sizes and summary statistics
- Shows total files, directories, and combined size
- **move_file**
- Move or rename files and directories
- Inputs:
@@ -122,14 +136,28 @@ The server's directory access control follows this flow:
- Fails if destination exists
- **search_files**
- Recursively search for files/directories
- Recursively search for files/directories that match or do not match patterns
- Inputs:
- `path` (string): Starting directory
- `pattern` (string): Search pattern
- `excludePatterns` (string[]): Exclude any patterns. Glob formats are supported.
- Case-insensitive matching
- `excludePatterns` (string[]): Exclude any patterns.
- Glob-style pattern matching
- Returns full paths to matches
- **directory_tree**
- Get recursive JSON tree structure of directory contents
- Inputs:
- `path` (string): Starting directory
- `excludePatterns` (string[]): Exclude any patterns. Glob formats are supported.
- Returns:
- JSON array where each entry contains:
- `name` (string): File/directory name
- `type` ('file'|'directory'): Entry type
- `children` (array): Present only for directories
- Empty array for empty directories
- Omitted for files
- Output is formatted with 2-space indentation for readability
- **get_file_info**
- Get detailed file/directory metadata
- Input: `path` (string)
@@ -201,11 +229,15 @@ For quick installation, click the installation buttons below...
[![Install with Docker in VS Code](https://img.shields.io/badge/VS_Code-Docker-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://insiders.vscode.dev/redirect/mcp/install?name=filesystem&config=%7B%22command%22%3A%22docker%22%2C%22args%22%3A%5B%22run%22%2C%22-i%22%2C%22--rm%22%2C%22--mount%22%2C%22type%3Dbind%2Csrc%3D%24%7BworkspaceFolder%7D%2Cdst%3D%2Fprojects%2Fworkspace%22%2C%22mcp%2Ffilesystem%22%2C%22%2Fprojects%22%5D%7D) [![Install with Docker in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Docker-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://insiders.vscode.dev/redirect/mcp/install?name=filesystem&config=%7B%22command%22%3A%22docker%22%2C%22args%22%3A%5B%22run%22%2C%22-i%22%2C%22--rm%22%2C%22--mount%22%2C%22type%3Dbind%2Csrc%3D%24%7BworkspaceFolder%7D%2Cdst%3D%2Fprojects%2Fworkspace%22%2C%22mcp%2Ffilesystem%22%2C%22%2Fprojects%22%5D%7D&quality=insiders)
For manual installation, add the following JSON block to your User Settings (JSON) file in VS Code. You can do this by pressing `Ctrl + Shift + P` and typing `Preferences: Open Settings (JSON)`.
For manual installation, you can configure the MCP server using one of these methods:
Optionally, you can add it to a file called `.vscode/mcp.json` in your workspace. This will allow you to share the configuration with others.
**Method 1: User Configuration (Recommended)**
Add the configuration to your user-level MCP configuration file. Open the Command Palette (`Ctrl + Shift + P`) and run `MCP: Open User Configuration`. This will open your user `mcp.json` file where you can add the server configuration.
> Note that the `mcp` key is not needed in the `.vscode/mcp.json` file.
**Method 2: Workspace Configuration**
Alternatively, you can add the configuration to a file called `.vscode/mcp.json` in your workspace. This will allow you to share the configuration with others.
> For more details about MCP configuration in VS Code, see the [official VS Code MCP documentation](https://code.visualstudio.com/docs/copilot/mcp).
You can provide sandboxed directories to the server by mounting them to `/projects`. Adding the `ro` flag will make the directory read-only to the server (e.g. `--mount type=bind,src=/local/dir,dst=/projects/dir,ro`).
@@ -214,19 +246,17 @@ Note: all directories must be mounted to `/projects` by default.
```json
{
"mcp": {
"servers": {
"filesystem": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"--mount", "type=bind,src=${workspaceFolder},dst=/projects/workspace",
"mcp/filesystem",
"/projects"
]
}
"servers": {
"filesystem": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"--mount", "type=bind,src=${workspaceFolder},dst=/projects/workspace",
"mcp/filesystem",
"/projects"
]
}
}
}
@@ -236,16 +266,14 @@ Note: all directories must be mounted to `/projects` by default.
```json
{
"mcp": {
"servers": {
"filesystem": {
"command": "npx",
"args": [
"-y",
"@modelcontextprotocol/server-filesystem",
"${workspaceFolder}"
]
}
"servers": {
"filesystem": {
"command": "npx",
"args": [
"-y",
"@modelcontextprotocol/server-filesystem",
"${workspaceFolder}"
]
}
}
}


@@ -0,0 +1,147 @@
import { describe, it, expect, beforeEach, afterEach } from '@jest/globals';
import * as fs from 'fs/promises';
import * as path from 'path';
import * as os from 'os';
// We need to test the buildTree function, but it's defined inside the request handler
// So we'll extract the core logic into a testable function
import { minimatch } from 'minimatch';
interface TreeEntry {
name: string;
type: 'file' | 'directory';
children?: TreeEntry[];
}
async function buildTreeForTesting(currentPath: string, rootPath: string, excludePatterns: string[] = []): Promise<TreeEntry[]> {
const entries = await fs.readdir(currentPath, {withFileTypes: true});
const result: TreeEntry[] = [];
for (const entry of entries) {
const relativePath = path.relative(rootPath, path.join(currentPath, entry.name));
const shouldExclude = excludePatterns.some(pattern => {
if (pattern.includes('*')) {
return minimatch(relativePath, pattern, {dot: true});
}
// For files: match exact name or as part of path
// For directories: match as directory path
return minimatch(relativePath, pattern, {dot: true}) ||
minimatch(relativePath, `**/${pattern}`, {dot: true}) ||
minimatch(relativePath, `**/${pattern}/**`, {dot: true});
});
if (shouldExclude)
continue;
const entryData: TreeEntry = {
name: entry.name,
type: entry.isDirectory() ? 'directory' : 'file'
};
if (entry.isDirectory()) {
const subPath = path.join(currentPath, entry.name);
entryData.children = await buildTreeForTesting(subPath, rootPath, excludePatterns);
}
result.push(entryData);
}
return result;
}
describe('buildTree exclude patterns', () => {
let testDir: string;
beforeEach(async () => {
testDir = await fs.mkdtemp(path.join(os.tmpdir(), 'filesystem-test-'));
// Create test directory structure
await fs.mkdir(path.join(testDir, 'src'));
await fs.mkdir(path.join(testDir, 'node_modules'));
await fs.mkdir(path.join(testDir, '.git'));
await fs.mkdir(path.join(testDir, 'nested', 'node_modules'), { recursive: true });
// Create test files
await fs.writeFile(path.join(testDir, '.env'), 'SECRET=value');
await fs.writeFile(path.join(testDir, '.env.local'), 'LOCAL_SECRET=value');
await fs.writeFile(path.join(testDir, 'src', 'index.js'), 'console.log("hello");');
await fs.writeFile(path.join(testDir, 'package.json'), '{}');
await fs.writeFile(path.join(testDir, 'node_modules', 'module.js'), 'module.exports = {};');
await fs.writeFile(path.join(testDir, 'nested', 'node_modules', 'deep.js'), 'module.exports = {};');
});
afterEach(async () => {
await fs.rm(testDir, { recursive: true, force: true });
});
it('should exclude files matching simple patterns', async () => {
// Test the current implementation - this will fail until the bug is fixed
const tree = await buildTreeForTesting(testDir, testDir, ['.env']);
const fileNames = tree.map(entry => entry.name);
expect(fileNames).not.toContain('.env');
expect(fileNames).toContain('.env.local'); // Should not exclude this
expect(fileNames).toContain('src');
expect(fileNames).toContain('package.json');
});
it('should exclude directories matching simple patterns', async () => {
const tree = await buildTreeForTesting(testDir, testDir, ['node_modules']);
const dirNames = tree.map(entry => entry.name);
expect(dirNames).not.toContain('node_modules');
expect(dirNames).toContain('src');
expect(dirNames).toContain('.git');
});
it('should exclude nested directories with same pattern', async () => {
const tree = await buildTreeForTesting(testDir, testDir, ['node_modules']);
// Find the nested directory
const nestedDir = tree.find(entry => entry.name === 'nested');
expect(nestedDir).toBeDefined();
expect(nestedDir!.children).toBeDefined();
// The nested/node_modules should also be excluded
const nestedChildren = nestedDir!.children!.map(child => child.name);
expect(nestedChildren).not.toContain('node_modules');
});
it('should handle glob patterns correctly', async () => {
const tree = await buildTreeForTesting(testDir, testDir, ['*.env']);
const fileNames = tree.map(entry => entry.name);
expect(fileNames).not.toContain('.env');
expect(fileNames).toContain('.env.local'); // *.env should not match .env.local
expect(fileNames).toContain('src');
});
it('should handle dot files correctly', async () => {
const tree = await buildTreeForTesting(testDir, testDir, ['.git']);
const dirNames = tree.map(entry => entry.name);
expect(dirNames).not.toContain('.git');
expect(dirNames).toContain('.env'); // Should not exclude this
});
it('should work with multiple exclude patterns', async () => {
const tree = await buildTreeForTesting(testDir, testDir, ['node_modules', '.env', '.git']);
const entryNames = tree.map(entry => entry.name);
expect(entryNames).not.toContain('node_modules');
expect(entryNames).not.toContain('.env');
expect(entryNames).not.toContain('.git');
expect(entryNames).toContain('src');
expect(entryNames).toContain('package.json');
});
it('should handle empty exclude patterns', async () => {
const tree = await buildTreeForTesting(testDir, testDir, []);
const entryNames = tree.map(entry => entry.name);
// All entries should be included
expect(entryNames).toContain('node_modules');
expect(entryNames).toContain('.env');
expect(entryNames).toContain('.git');
expect(entryNames).toContain('src');
});
});


@@ -0,0 +1,701 @@
import { describe, it, expect, beforeEach, afterEach, jest } from '@jest/globals';
import fs from 'fs/promises';
import path from 'path';
import os from 'os';
import {
// Pure utility functions
formatSize,
normalizeLineEndings,
createUnifiedDiff,
// Security & validation functions
validatePath,
setAllowedDirectories,
// File operations
getFileStats,
readFileContent,
writeFileContent,
// Search & filtering functions
searchFilesWithValidation,
// File editing functions
applyFileEdits,
tailFile,
headFile
} from '../lib.js';
// Mock fs module
jest.mock('fs/promises');
const mockFs = fs as jest.Mocked<typeof fs>;
describe('Lib Functions', () => {
beforeEach(() => {
jest.clearAllMocks();
// Set up allowed directories for tests
const allowedDirs = process.platform === 'win32' ? ['C:\\Users\\test', 'C:\\temp', 'C:\\allowed'] : ['/home/user', '/tmp', '/allowed'];
setAllowedDirectories(allowedDirs);
});
afterEach(() => {
jest.restoreAllMocks();
// Clear allowed directories after tests
setAllowedDirectories([]);
});
describe('Pure Utility Functions', () => {
describe('formatSize', () => {
it('formats bytes correctly', () => {
expect(formatSize(0)).toBe('0 B');
expect(formatSize(512)).toBe('512 B');
expect(formatSize(1024)).toBe('1.00 KB');
expect(formatSize(1536)).toBe('1.50 KB');
expect(formatSize(1048576)).toBe('1.00 MB');
expect(formatSize(1073741824)).toBe('1.00 GB');
expect(formatSize(1099511627776)).toBe('1.00 TB');
});
it('handles edge cases', () => {
expect(formatSize(1023)).toBe('1023 B');
expect(formatSize(1025)).toBe('1.00 KB');
expect(formatSize(1048575)).toBe('1024.00 KB');
});
it('handles very large numbers beyond TB', () => {
// The function only supports up to TB, so very large numbers will show as TB
expect(formatSize(1024 * 1024 * 1024 * 1024 * 1024)).toBe('1024.00 TB');
expect(formatSize(Number.MAX_SAFE_INTEGER)).toContain('TB');
});
it('handles negative numbers', () => {
// Negative numbers will result in NaN for the log calculation
expect(formatSize(-1024)).toContain('NaN');
expect(formatSize(-0)).toBe('0 B');
});
it('handles decimal numbers', () => {
expect(formatSize(1536.5)).toBe('1.50 KB');
expect(formatSize(1023.9)).toBe('1023.9 B');
});
it('handles very small positive numbers', () => {
expect(formatSize(1)).toBe('1 B');
expect(formatSize(0.5)).toBe('0.5 B');
expect(formatSize(0.1)).toBe('0.1 B');
});
});
describe('normalizeLineEndings', () => {
it('converts CRLF to LF', () => {
expect(normalizeLineEndings('line1\r\nline2\r\nline3')).toBe('line1\nline2\nline3');
});
it('leaves LF unchanged', () => {
expect(normalizeLineEndings('line1\nline2\nline3')).toBe('line1\nline2\nline3');
});
it('handles mixed line endings', () => {
expect(normalizeLineEndings('line1\r\nline2\nline3\r\n')).toBe('line1\nline2\nline3\n');
});
it('handles empty string', () => {
expect(normalizeLineEndings('')).toBe('');
});
});
describe('createUnifiedDiff', () => {
it('creates diff for simple changes', () => {
const original = 'line1\nline2\nline3';
const modified = 'line1\nmodified line2\nline3';
const diff = createUnifiedDiff(original, modified, 'test.txt');
expect(diff).toContain('--- test.txt');
expect(diff).toContain('+++ test.txt');
expect(diff).toContain('-line2');
expect(diff).toContain('+modified line2');
});
it('handles CRLF normalization', () => {
const original = 'line1\r\nline2\r\n';
const modified = 'line1\nmodified line2\n';
const diff = createUnifiedDiff(original, modified);
expect(diff).toContain('-line2');
expect(diff).toContain('+modified line2');
});
it('handles identical content', () => {
const content = 'line1\nline2\nline3';
const diff = createUnifiedDiff(content, content);
// Should not contain any +/- lines for identical content (excluding header lines)
expect(diff.split('\n').filter((line: string) => line.startsWith('+++') || line.startsWith('---'))).toHaveLength(2);
expect(diff.split('\n').filter((line: string) => line.startsWith('+') && !line.startsWith('+++'))).toHaveLength(0);
expect(diff.split('\n').filter((line: string) => line.startsWith('-') && !line.startsWith('---'))).toHaveLength(0);
});
it('handles empty content', () => {
const diff = createUnifiedDiff('', '');
expect(diff).toContain('--- file');
expect(diff).toContain('+++ file');
});
it('handles default filename parameter', () => {
const diff = createUnifiedDiff('old', 'new');
expect(diff).toContain('--- file');
expect(diff).toContain('+++ file');
});
it('handles custom filename', () => {
const diff = createUnifiedDiff('old', 'new', 'custom.txt');
expect(diff).toContain('--- custom.txt');
expect(diff).toContain('+++ custom.txt');
});
});
});
describe('Security & Validation Functions', () => {
describe('validatePath', () => {
// Use Windows-compatible paths for testing
const allowedDirs = process.platform === 'win32' ? ['C:\\Users\\test', 'C:\\temp'] : ['/home/user', '/tmp'];
beforeEach(() => {
mockFs.realpath.mockImplementation(async (path: any) => path.toString());
});
it('validates allowed paths', async () => {
const testPath = process.platform === 'win32' ? 'C:\\Users\\test\\file.txt' : '/home/user/file.txt';
const result = await validatePath(testPath);
expect(result).toBe(testPath);
});
it('rejects disallowed paths', async () => {
const testPath = process.platform === 'win32' ? 'C:\\Windows\\System32\\file.txt' : '/etc/passwd';
await expect(validatePath(testPath))
.rejects.toThrow('Access denied - path outside allowed directories');
});
it('handles non-existent files by checking parent directory', async () => {
const newFilePath = process.platform === 'win32' ? 'C:\\Users\\test\\newfile.txt' : '/home/user/newfile.txt';
const parentPath = process.platform === 'win32' ? 'C:\\Users\\test' : '/home/user';
// Create an error with the ENOENT code that the implementation checks for
const enoentError = new Error('ENOENT') as NodeJS.ErrnoException;
enoentError.code = 'ENOENT';
mockFs.realpath
.mockRejectedValueOnce(enoentError)
.mockResolvedValueOnce(parentPath);
const result = await validatePath(newFilePath);
expect(result).toBe(path.resolve(newFilePath));
});
it('rejects when parent directory does not exist', async () => {
const newFilePath = process.platform === 'win32' ? 'C:\\Users\\test\\nonexistent\\newfile.txt' : '/home/user/nonexistent/newfile.txt';
// Create errors with the ENOENT code
const enoentError1 = new Error('ENOENT') as NodeJS.ErrnoException;
enoentError1.code = 'ENOENT';
const enoentError2 = new Error('ENOENT') as NodeJS.ErrnoException;
enoentError2.code = 'ENOENT';
mockFs.realpath
.mockRejectedValueOnce(enoentError1)
.mockRejectedValueOnce(enoentError2);
await expect(validatePath(newFilePath))
.rejects.toThrow('Parent directory does not exist');
});
});
});
describe('File Operations', () => {
describe('getFileStats', () => {
it('returns file statistics', async () => {
const mockStats = {
size: 1024,
birthtime: new Date('2023-01-01'),
mtime: new Date('2023-01-02'),
atime: new Date('2023-01-03'),
isDirectory: () => false,
isFile: () => true,
mode: 0o644
};
mockFs.stat.mockResolvedValueOnce(mockStats as any);
const result = await getFileStats('/test/file.txt');
expect(result).toEqual({
size: 1024,
created: new Date('2023-01-01'),
modified: new Date('2023-01-02'),
accessed: new Date('2023-01-03'),
isDirectory: false,
isFile: true,
permissions: '644'
});
});
it('handles directory statistics', async () => {
const mockStats = {
size: 4096,
birthtime: new Date('2023-01-01'),
mtime: new Date('2023-01-02'),
atime: new Date('2023-01-03'),
isDirectory: () => true,
isFile: () => false,
mode: 0o755
};
mockFs.stat.mockResolvedValueOnce(mockStats as any);
const result = await getFileStats('/test/dir');
expect(result.isDirectory).toBe(true);
expect(result.isFile).toBe(false);
expect(result.permissions).toBe('755');
});
});
describe('readFileContent', () => {
it('reads file with default encoding', async () => {
mockFs.readFile.mockResolvedValueOnce('file content');
const result = await readFileContent('/test/file.txt');
expect(result).toBe('file content');
expect(mockFs.readFile).toHaveBeenCalledWith('/test/file.txt', 'utf-8');
});
it('reads file with custom encoding', async () => {
mockFs.readFile.mockResolvedValueOnce('file content');
const result = await readFileContent('/test/file.txt', 'ascii');
expect(result).toBe('file content');
expect(mockFs.readFile).toHaveBeenCalledWith('/test/file.txt', 'ascii');
});
});
describe('writeFileContent', () => {
it('writes file content', async () => {
mockFs.writeFile.mockResolvedValueOnce(undefined);
await writeFileContent('/test/file.txt', 'new content');
expect(mockFs.writeFile).toHaveBeenCalledWith('/test/file.txt', 'new content', { encoding: "utf-8", flag: 'wx' });
});
});
});
describe('Search & Filtering Functions', () => {
describe('searchFilesWithValidation', () => {
beforeEach(() => {
mockFs.realpath.mockImplementation(async (path: any) => path.toString());
});
it('excludes files matching exclude patterns', async () => {
const mockEntries = [
{ name: 'test.txt', isDirectory: () => false },
{ name: 'test.log', isDirectory: () => false },
{ name: 'node_modules', isDirectory: () => true }
];
mockFs.readdir.mockResolvedValueOnce(mockEntries as any);
const testDir = process.platform === 'win32' ? 'C:\\allowed\\dir' : '/allowed/dir';
const allowedDirs = process.platform === 'win32' ? ['C:\\allowed'] : ['/allowed'];
// Mock realpath to return the same path for validation to pass
mockFs.realpath.mockImplementation(async (inputPath: any) => {
const pathStr = inputPath.toString();
// Return the path as-is for validation
return pathStr;
});
const result = await searchFilesWithValidation(
testDir,
'*test*',
allowedDirs,
{ excludePatterns: ['*.log', 'node_modules'] }
);
const expectedResult = process.platform === 'win32' ? 'C:\\allowed\\dir\\test.txt' : '/allowed/dir/test.txt';
expect(result).toEqual([expectedResult]);
});
it('handles validation errors during search', async () => {
const mockEntries = [
{ name: 'test.txt', isDirectory: () => false },
{ name: 'invalid_file.txt', isDirectory: () => false }
];
mockFs.readdir.mockResolvedValueOnce(mockEntries as any);
// Mock validatePath to throw error for invalid_file.txt
mockFs.realpath.mockImplementation(async (path: any) => {
if (path.toString().includes('invalid_file.txt')) {
throw new Error('Access denied');
}
return path.toString();
});
const testDir = process.platform === 'win32' ? 'C:\\allowed\\dir' : '/allowed/dir';
const allowedDirs = process.platform === 'win32' ? ['C:\\allowed'] : ['/allowed'];
const result = await searchFilesWithValidation(
testDir,
'*test*',
allowedDirs,
{}
);
// Should only return the valid file, skipping the invalid one
const expectedResult = process.platform === 'win32' ? 'C:\\allowed\\dir\\test.txt' : '/allowed/dir/test.txt';
expect(result).toEqual([expectedResult]);
});
it('handles complex exclude patterns with wildcards', async () => {
const mockEntries = [
{ name: 'test.txt', isDirectory: () => false },
{ name: 'test.backup', isDirectory: () => false },
{ name: 'important_test.js', isDirectory: () => false }
];
mockFs.readdir.mockResolvedValueOnce(mockEntries as any);
const testDir = process.platform === 'win32' ? 'C:\\allowed\\dir' : '/allowed/dir';
const allowedDirs = process.platform === 'win32' ? ['C:\\allowed'] : ['/allowed'];
const result = await searchFilesWithValidation(
testDir,
'*test*',
allowedDirs,
{ excludePatterns: ['*.backup'] }
);
const expectedResults = process.platform === 'win32' ? [
'C:\\allowed\\dir\\test.txt',
'C:\\allowed\\dir\\important_test.js'
] : [
'/allowed/dir/test.txt',
'/allowed/dir/important_test.js'
];
expect(result).toEqual(expectedResults);
});
});
});
describe('File Editing Functions', () => {
describe('applyFileEdits', () => {
beforeEach(() => {
mockFs.readFile.mockResolvedValue('line1\nline2\nline3\n');
mockFs.writeFile.mockResolvedValue(undefined);
});
it('applies simple text replacement', async () => {
const edits = [
{ oldText: 'line2', newText: 'modified line2' }
];
mockFs.rename.mockResolvedValueOnce(undefined);
const result = await applyFileEdits('/test/file.txt', edits, false);
expect(result).toContain('modified line2');
// Should write to a temporary file, then rename
expect(mockFs.writeFile).toHaveBeenCalledWith(
expect.stringMatching(/\/test\/file\.txt\.[a-f0-9]+\.tmp$/),
'line1\nmodified line2\nline3\n',
'utf-8'
);
expect(mockFs.rename).toHaveBeenCalledWith(
expect.stringMatching(/\/test\/file\.txt\.[a-f0-9]+\.tmp$/),
'/test/file.txt'
);
});
it('handles dry run mode', async () => {
const edits = [
{ oldText: 'line2', newText: 'modified line2' }
];
const result = await applyFileEdits('/test/file.txt', edits, true);
expect(result).toContain('modified line2');
expect(mockFs.writeFile).not.toHaveBeenCalled();
});
it('applies multiple edits sequentially', async () => {
const edits = [
{ oldText: 'line1', newText: 'first line' },
{ oldText: 'line3', newText: 'third line' }
];
mockFs.rename.mockResolvedValueOnce(undefined);
await applyFileEdits('/test/file.txt', edits, false);
expect(mockFs.writeFile).toHaveBeenCalledWith(
expect.stringMatching(/\/test\/file\.txt\.[a-f0-9]+\.tmp$/),
'first line\nline2\nthird line\n',
'utf-8'
);
expect(mockFs.rename).toHaveBeenCalledWith(
expect.stringMatching(/\/test\/file\.txt\.[a-f0-9]+\.tmp$/),
'/test/file.txt'
);
});
it('handles whitespace-flexible matching', async () => {
mockFs.readFile.mockResolvedValue(' line1\n line2\n line3\n');
const edits = [
{ oldText: 'line2', newText: 'modified line2' }
];
mockFs.rename.mockResolvedValueOnce(undefined);
await applyFileEdits('/test/file.txt', edits, false);
expect(mockFs.writeFile).toHaveBeenCalledWith(
expect.stringMatching(/\/test\/file\.txt\.[a-f0-9]+\.tmp$/),
' line1\n modified line2\n line3\n',
'utf-8'
);
expect(mockFs.rename).toHaveBeenCalledWith(
expect.stringMatching(/\/test\/file\.txt\.[a-f0-9]+\.tmp$/),
'/test/file.txt'
);
});
it('throws error for non-matching edits', async () => {
const edits = [
{ oldText: 'nonexistent line', newText: 'replacement' }
];
await expect(applyFileEdits('/test/file.txt', edits, false))
.rejects.toThrow('Could not find exact match for edit');
});
it('handles complex multi-line edits with indentation', async () => {
mockFs.readFile.mockResolvedValue('function test() {\n console.log("hello");\n return true;\n}');
const edits = [
{
oldText: ' console.log("hello");\n return true;',
newText: ' console.log("world");\n console.log("test");\n return false;'
}
];
mockFs.rename.mockResolvedValueOnce(undefined);
await applyFileEdits('/test/file.js', edits, false);
expect(mockFs.writeFile).toHaveBeenCalledWith(
expect.stringMatching(/\/test\/file\.js\.[a-f0-9]+\.tmp$/),
'function test() {\n console.log("world");\n console.log("test");\n return false;\n}',
'utf-8'
);
expect(mockFs.rename).toHaveBeenCalledWith(
expect.stringMatching(/\/test\/file\.js\.[a-f0-9]+\.tmp$/),
'/test/file.js'
);
});
it('handles edits with different indentation patterns', async () => {
mockFs.readFile.mockResolvedValue(' if (condition) {\n doSomething();\n }');
const edits = [
{
oldText: 'doSomething();',
newText: 'doSomethingElse();\n doAnotherThing();'
}
];
mockFs.rename.mockResolvedValueOnce(undefined);
await applyFileEdits('/test/file.js', edits, false);
expect(mockFs.writeFile).toHaveBeenCalledWith(
expect.stringMatching(/\/test\/file\.js\.[a-f0-9]+\.tmp$/),
' if (condition) {\n doSomethingElse();\n doAnotherThing();\n }',
'utf-8'
);
expect(mockFs.rename).toHaveBeenCalledWith(
expect.stringMatching(/\/test\/file\.js\.[a-f0-9]+\.tmp$/),
'/test/file.js'
);
});
it('handles CRLF line endings in file content', async () => {
mockFs.readFile.mockResolvedValue('line1\r\nline2\r\nline3\r\n');
const edits = [
{ oldText: 'line2', newText: 'modified line2' }
];
mockFs.rename.mockResolvedValueOnce(undefined);
await applyFileEdits('/test/file.txt', edits, false);
expect(mockFs.writeFile).toHaveBeenCalledWith(
expect.stringMatching(/\/test\/file\.txt\.[a-f0-9]+\.tmp$/),
'line1\nmodified line2\nline3\n',
'utf-8'
);
expect(mockFs.rename).toHaveBeenCalledWith(
expect.stringMatching(/\/test\/file\.txt\.[a-f0-9]+\.tmp$/),
'/test/file.txt'
);
});
});
describe('tailFile', () => {
it('handles empty files', async () => {
mockFs.stat.mockResolvedValue({ size: 0 } as any);
const result = await tailFile('/test/empty.txt', 5);
expect(result).toBe('');
expect(mockFs.open).not.toHaveBeenCalled();
});
it('calls stat to check file size', async () => {
mockFs.stat.mockResolvedValue({ size: 100 } as any);
// Mock file handle with proper typing
const mockFileHandle = {
read: jest.fn(),
close: jest.fn()
} as any;
mockFileHandle.read.mockResolvedValue({ bytesRead: 0 });
mockFileHandle.close.mockResolvedValue(undefined);
mockFs.open.mockResolvedValue(mockFileHandle);
await tailFile('/test/file.txt', 2);
expect(mockFs.stat).toHaveBeenCalledWith('/test/file.txt');
expect(mockFs.open).toHaveBeenCalledWith('/test/file.txt', 'r');
});
it('handles files with content and returns last lines', async () => {
mockFs.stat.mockResolvedValue({ size: 50 } as any);
const mockFileHandle = {
read: jest.fn(),
close: jest.fn()
} as any;
// Simulate reading file content in chunks
mockFileHandle.read
.mockResolvedValueOnce({ bytesRead: 20, buffer: Buffer.from('line3\nline4\nline5\n') })
.mockResolvedValueOnce({ bytesRead: 0 });
mockFileHandle.close.mockResolvedValue(undefined);
mockFs.open.mockResolvedValue(mockFileHandle);
const result = await tailFile('/test/file.txt', 2);
expect(mockFileHandle.close).toHaveBeenCalled();
});
it('handles read errors gracefully', async () => {
mockFs.stat.mockResolvedValue({ size: 100 } as any);
const mockFileHandle = {
read: jest.fn(),
close: jest.fn()
} as any;
mockFileHandle.read.mockResolvedValue({ bytesRead: 0 });
mockFileHandle.close.mockResolvedValue(undefined);
mockFs.open.mockResolvedValue(mockFileHandle);
await tailFile('/test/file.txt', 5);
expect(mockFileHandle.close).toHaveBeenCalled();
});
});
describe('headFile', () => {
it('opens file for reading', async () => {
// Mock file handle with proper typing
const mockFileHandle = {
read: jest.fn(),
close: jest.fn()
} as any;
mockFileHandle.read.mockResolvedValue({ bytesRead: 0 });
mockFileHandle.close.mockResolvedValue(undefined);
mockFs.open.mockResolvedValue(mockFileHandle);
await headFile('/test/file.txt', 2);
expect(mockFs.open).toHaveBeenCalledWith('/test/file.txt', 'r');
});
it('handles files with content and returns first lines', async () => {
const mockFileHandle = {
read: jest.fn(),
close: jest.fn()
} as any;
// Simulate reading file content with newlines
mockFileHandle.read
.mockResolvedValueOnce({ bytesRead: 20, buffer: Buffer.from('line1\nline2\nline3\n') })
.mockResolvedValueOnce({ bytesRead: 0 });
mockFileHandle.close.mockResolvedValue(undefined);
mockFs.open.mockResolvedValue(mockFileHandle);
const result = await headFile('/test/file.txt', 2);
expect(mockFileHandle.close).toHaveBeenCalled();
});
it('handles files with leftover content', async () => {
const mockFileHandle = {
read: jest.fn(),
close: jest.fn()
} as any;
// Simulate reading file content without final newline
mockFileHandle.read
.mockResolvedValueOnce({ bytesRead: 15, buffer: Buffer.from('line1\nline2\nend') })
.mockResolvedValueOnce({ bytesRead: 0 });
mockFileHandle.close.mockResolvedValue(undefined);
mockFs.open.mockResolvedValue(mockFileHandle);
const result = await headFile('/test/file.txt', 5);
expect(mockFileHandle.close).toHaveBeenCalled();
});
it('handles reaching requested line count', async () => {
const mockFileHandle = {
read: jest.fn(),
close: jest.fn()
} as any;
// Simulate reading exactly the requested number of lines
mockFileHandle.read
.mockResolvedValueOnce({ bytesRead: 12, buffer: Buffer.from('line1\nline2\n') })
.mockResolvedValueOnce({ bytesRead: 0 });
mockFileHandle.close.mockResolvedValue(undefined);
mockFs.open.mockResolvedValue(mockFileHandle);
const result = await headFile('/test/file.txt', 2);
expect(mockFileHandle.close).toHaveBeenCalled();
});
});
});
});

View File

@@ -162,6 +162,12 @@ describe('Path Utilities', () => {
expect(result).not.toContain('~');
});
it('expands bare ~ to home directory', () => {
const result = expandHome('~');
expect(result).not.toContain('~');
expect(result.length).toBeGreaterThan(0);
});
it('leaves other paths unchanged', () => {
expect(expandHome('C:/test')).toBe('C:/test');
});

View File

@@ -4,6 +4,49 @@ import * as fs from 'fs/promises';
import * as os from 'os';
import { isPathWithinAllowedDirectories } from '../path-validation.js';
/**
* Check if the current environment supports symlink creation
*/
async function checkSymlinkSupport(): Promise<boolean> {
const testDir = await fs.mkdtemp(path.join(os.tmpdir(), 'symlink-test-'));
try {
const targetFile = path.join(testDir, 'target.txt');
const linkFile = path.join(testDir, 'link.txt');
await fs.writeFile(targetFile, 'test');
await fs.symlink(targetFile, linkFile);
// If we get here, symlinks are supported
return true;
} catch (error) {
// EPERM indicates no symlink permissions
if ((error as NodeJS.ErrnoException).code === 'EPERM') {
return false;
}
// Other errors might indicate a real problem
throw error;
} finally {
await fs.rm(testDir, { recursive: true, force: true });
}
}
// Global variable to store symlink support status
let symlinkSupported: boolean | null = null;
/**
* Get cached symlink support status, checking once per test run
*/
async function getSymlinkSupport(): Promise<boolean> {
if (symlinkSupported === null) {
symlinkSupported = await checkSymlinkSupport();
if (!symlinkSupported) {
console.log('\n⚠ Symlink tests will be skipped - symlink creation not supported in this environment');
console.log(' On Windows, enable Developer Mode or run as Administrator to enable symlink tests');
}
}
return symlinkSupported;
}
describe('Path Validation', () => {
it('allows exact directory match', () => {
const allowed = ['/home/user/project'];
@@ -587,6 +630,12 @@ describe('Path Validation', () => {
});
it('demonstrates symlink race condition allows writing outside allowed directories', async () => {
const symlinkSupported = await getSymlinkSupport();
if (!symlinkSupported) {
console.log(' ⏭️ Skipping symlink race condition test - symlinks not supported');
return;
}
const allowed = [allowedDir];
await expect(fs.access(testPath)).rejects.toThrow();
@@ -603,6 +652,12 @@ describe('Path Validation', () => {
});
it('shows timing differences between validation approaches', async () => {
const symlinkSupported = await getSymlinkSupport();
if (!symlinkSupported) {
console.log(' ⏭️ Skipping timing validation test - symlinks not supported');
return;
}
const allowed = [allowedDir];
const validation1 = isPathWithinAllowedDirectories(testPath, allowed);
@@ -618,6 +673,12 @@ describe('Path Validation', () => {
});
it('validates directory creation timing', async () => {
const symlinkSupported = await getSymlinkSupport();
if (!symlinkSupported) {
console.log(' ⏭️ Skipping directory creation timing test - symlinks not supported');
return;
}
const allowed = [allowedDir];
const testDir = path.join(allowedDir, 'newdir');
@@ -632,6 +693,12 @@ describe('Path Validation', () => {
});
it('demonstrates exclusive file creation behavior', async () => {
const symlinkSupported = await getSymlinkSupport();
if (!symlinkSupported) {
console.log(' ⏭️ Skipping exclusive file creation test - symlinks not supported');
return;
}
const allowed = [allowedDir];
await fs.symlink(targetFile, testPath);
@@ -644,6 +711,12 @@ describe('Path Validation', () => {
});
it('should use resolved parent paths for non-existent files', async () => {
const symlinkSupported = await getSymlinkSupport();
if (!symlinkSupported) {
console.log(' ⏭️ Skipping resolved parent paths test - symlinks not supported');
return;
}
const allowed = [allowedDir];
const symlinkDir = path.join(allowedDir, 'link');
@@ -662,6 +735,12 @@ describe('Path Validation', () => {
});
it('demonstrates parent directory symlink traversal', async () => {
const symlinkSupported = await getSymlinkSupport();
if (!symlinkSupported) {
console.log(' ⏭️ Skipping parent directory symlink traversal test - symlinks not supported');
return;
}
const allowed = [allowedDir];
const deepPath = path.join(allowedDir, 'sub1', 'sub2', 'file.txt');
@@ -682,6 +761,12 @@ describe('Path Validation', () => {
});
it('should prevent race condition between validatePath and file operation', async () => {
const symlinkSupported = await getSymlinkSupport();
if (!symlinkSupported) {
console.log(' ⏭️ Skipping race condition prevention test - symlinks not supported');
return;
}
const allowed = [allowedDir];
const racePath = path.join(allowedDir, 'race-file.txt');
const targetFile = path.join(forbiddenDir, 'target.txt');
@@ -730,6 +815,12 @@ describe('Path Validation', () => {
});
it('should handle symlinks that point within allowed directories', async () => {
const symlinkSupported = await getSymlinkSupport();
if (!symlinkSupported) {
console.log(' ⏭️ Skipping symlinks within allowed directories test - symlinks not supported');
return;
}
const allowed = [allowedDir];
const targetFile = path.join(allowedDir, 'target.txt');
const symlinkPath = path.join(allowedDir, 'symlink.txt');
@@ -756,6 +847,12 @@ describe('Path Validation', () => {
});
it('should prevent overwriting files through symlinks pointing outside allowed directories', async () => {
const symlinkSupported = await getSymlinkSupport();
if (!symlinkSupported) {
console.log(' ⏭️ Skipping symlink overwrite prevention test - symlinks not supported');
return;
}
const allowed = [allowedDir];
const legitFile = path.join(allowedDir, 'existing.txt');
const targetFile = path.join(forbiddenDir, 'target.txt');
@@ -786,6 +883,12 @@ describe('Path Validation', () => {
});
it('demonstrates race condition in read operations', async () => {
const symlinkSupported = await getSymlinkSupport();
if (!symlinkSupported) {
console.log(' ⏭️ Skipping race condition in read operations test - symlinks not supported');
return;
}
const allowed = [allowedDir];
const legitFile = path.join(allowedDir, 'readable.txt');
const secretFile = path.join(forbiddenDir, 'secret.txt');
@@ -812,6 +915,12 @@ describe('Path Validation', () => {
});
it('verifies rename does not follow symlinks', async () => {
const symlinkSupported = await getSymlinkSupport();
if (!symlinkSupported) {
console.log(' ⏭️ Skipping rename symlink test - symlinks not supported');
return;
}
const allowed = [allowedDir];
const tempFile = path.join(allowedDir, 'temp.txt');
const targetSymlink = path.join(allowedDir, 'target-symlink.txt');

View File

@@ -10,15 +10,26 @@ import {
type Root,
} from "@modelcontextprotocol/sdk/types.js";
import fs from "fs/promises";
import { createReadStream } from "fs";
import path from "path";
import os from 'os';
import { randomBytes } from 'crypto';
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
import { diffLines, createTwoFilesPatch } from 'diff';
import { minimatch } from 'minimatch';
import { isPathWithinAllowedDirectories } from './path-validation.js';
import { minimatch } from "minimatch";
import { normalizePath, expandHome } from './path-utils.js';
import { getValidRootDirectories } from './roots-utils.js';
import {
// Function imports
formatSize,
validatePath,
getFileStats,
readFileContent,
writeFileContent,
searchFilesWithValidation,
applyFileEdits,
tailFile,
headFile,
setAllowedDirectories,
} from './lib.js';
// Command line argument parsing
const args = process.argv.slice(2);
@@ -30,25 +41,14 @@ if (args.length === 0) {
console.error("At least one directory must be provided by EITHER method for the server to operate.");
}
// Normalize all paths consistently
function normalizePath(p: string): string {
return path.normalize(p);
}
function expandHome(filepath: string): string {
if (filepath.startsWith('~/') || filepath === '~') {
return path.join(os.homedir(), filepath.slice(1));
}
return filepath;
}
// Store allowed directories in normalized and resolved form
let allowedDirectories = await Promise.all(
args.map(async (dir) => {
const expanded = expandHome(dir);
const absolute = path.resolve(expanded);
try {
// Resolve symlinks in allowed directories during startup
// Security: Resolve symlinks in allowed directories during startup
// This ensures we know the real paths and can validate against them later
const resolved = await fs.realpath(absolute);
return normalizePath(resolved);
} catch (error) {
@@ -60,9 +60,9 @@ let allowedDirectories = await Promise.all(
);
// Validate that all directories exist and are accessible
await Promise.all(args.map(async (dir) => {
await Promise.all(allowedDirectories.map(async (dir) => {
try {
const stats = await fs.stat(expandHome(dir));
const stats = await fs.stat(dir);
if (!stats.isDirectory()) {
console.error(`Error: ${dir} is not a directory`);
process.exit(1);
@@ -73,57 +73,25 @@ await Promise.all(args.map(async (dir) => {
}
}));
// Security utilities
async function validatePath(requestedPath: string): Promise<string> {
const expandedPath = expandHome(requestedPath);
const absolute = path.isAbsolute(expandedPath)
? path.resolve(expandedPath)
: path.resolve(process.cwd(), expandedPath);
const normalizedRequested = normalizePath(absolute);
// Check if path is within allowed directories
const isAllowed = isPathWithinAllowedDirectories(normalizedRequested, allowedDirectories);
if (!isAllowed) {
throw new Error(`Access denied - path outside allowed directories: ${absolute} not in ${allowedDirectories.join(', ')}`);
}
// Handle symlinks by checking their real path
try {
const realPath = await fs.realpath(absolute);
const normalizedReal = normalizePath(realPath);
if (!isPathWithinAllowedDirectories(normalizedReal, allowedDirectories)) {
throw new Error(`Access denied - symlink target outside allowed directories: ${realPath} not in ${allowedDirectories.join(', ')}`);
}
return realPath;
} catch (error) {
// For new files that don't exist yet, verify parent directory
if ((error as NodeJS.ErrnoException).code === 'ENOENT') {
const parentDir = path.dirname(absolute);
try {
const realParentPath = await fs.realpath(parentDir);
const normalizedParent = normalizePath(realParentPath);
if (!isPathWithinAllowedDirectories(normalizedParent, allowedDirectories)) {
throw new Error(`Access denied - parent directory outside allowed directories: ${realParentPath} not in ${allowedDirectories.join(', ')}`);
}
return absolute;
} catch {
throw new Error(`Parent directory does not exist: ${parentDir}`);
}
}
throw error;
}
}
// Initialize the global allowedDirectories in lib.ts
setAllowedDirectories(allowedDirectories);
// Schema definitions
const ReadFileArgsSchema = z.object({
const ReadTextFileArgsSchema = z.object({
path: z.string(),
tail: z.number().optional().describe('If provided, returns only the last N lines of the file'),
head: z.number().optional().describe('If provided, returns only the first N lines of the file')
});
const ReadMediaFileArgsSchema = z.object({
path: z.string()
});
const ReadMultipleFilesArgsSchema = z.object({
paths: z.array(z.string()),
paths: z
.array(z.string())
.min(1, "At least one file path must be provided")
.describe("Array of file paths to read. Each path must be a string pointing to a valid file within allowed directories."),
});
const WriteFileArgsSchema = z.object({
@@ -157,6 +125,7 @@ const ListDirectoryWithSizesArgsSchema = z.object({
const DirectoryTreeArgsSchema = z.object({
path: z.string(),
excludePatterns: z.array(z.string()).optional().default([])
});
const MoveFileArgsSchema = z.object({
@@ -177,16 +146,6 @@ const GetFileInfoArgsSchema = z.object({
const ToolInputSchema = ToolSchema.shape.inputSchema;
type ToolInput = z.infer<typeof ToolInputSchema>;
interface FileInfo {
size: number;
created: Date;
modified: Date;
accessed: Date;
isDirectory: boolean;
isFile: boolean;
permissions: string;
}
// Server setup
const server = new Server(
{
@@ -200,275 +159,22 @@ const server = new Server(
},
);
// Tool implementations
async function getFileStats(filePath: string): Promise<FileInfo> {
const stats = await fs.stat(filePath);
return {
size: stats.size,
created: stats.birthtime,
modified: stats.mtime,
accessed: stats.atime,
isDirectory: stats.isDirectory(),
isFile: stats.isFile(),
permissions: stats.mode.toString(8).slice(-3),
};
}
async function searchFiles(
rootPath: string,
pattern: string,
excludePatterns: string[] = []
): Promise<string[]> {
const results: string[] = [];
async function search(currentPath: string) {
const entries = await fs.readdir(currentPath, { withFileTypes: true });
for (const entry of entries) {
const fullPath = path.join(currentPath, entry.name);
try {
// Validate each path before processing
await validatePath(fullPath);
// Check if path matches any exclude pattern
const relativePath = path.relative(rootPath, fullPath);
const shouldExclude = excludePatterns.some(pattern => {
const globPattern = pattern.includes('*') ? pattern : `**/${pattern}/**`;
return minimatch(relativePath, globPattern, { dot: true });
});
if (shouldExclude) {
continue;
}
if (entry.name.toLowerCase().includes(pattern.toLowerCase())) {
results.push(fullPath);
}
if (entry.isDirectory()) {
await search(fullPath);
}
} catch (error) {
// Skip invalid paths during search
continue;
}
}
}
await search(rootPath);
return results;
}
// file editing and diffing utilities
function normalizeLineEndings(text: string): string {
return text.replace(/\r\n/g, '\n');
}
function createUnifiedDiff(originalContent: string, newContent: string, filepath: string = 'file'): string {
// Ensure consistent line endings for diff
const normalizedOriginal = normalizeLineEndings(originalContent);
const normalizedNew = normalizeLineEndings(newContent);
return createTwoFilesPatch(
filepath,
filepath,
normalizedOriginal,
normalizedNew,
'original',
'modified'
);
}
async function applyFileEdits(
filePath: string,
edits: Array<{oldText: string, newText: string}>,
dryRun = false
): Promise<string> {
// Read file content and normalize line endings
const content = normalizeLineEndings(await fs.readFile(filePath, 'utf-8'));
// Apply edits sequentially
let modifiedContent = content;
for (const edit of edits) {
const normalizedOld = normalizeLineEndings(edit.oldText);
const normalizedNew = normalizeLineEndings(edit.newText);
// If exact match exists, use it
if (modifiedContent.includes(normalizedOld)) {
modifiedContent = modifiedContent.replace(normalizedOld, normalizedNew);
continue;
}
// Otherwise, try line-by-line matching with flexibility for whitespace
const oldLines = normalizedOld.split('\n');
const contentLines = modifiedContent.split('\n');
let matchFound = false;
for (let i = 0; i <= contentLines.length - oldLines.length; i++) {
const potentialMatch = contentLines.slice(i, i + oldLines.length);
// Compare lines with normalized whitespace
const isMatch = oldLines.every((oldLine, j) => {
const contentLine = potentialMatch[j];
return oldLine.trim() === contentLine.trim();
});
if (isMatch) {
// Preserve original indentation of first line
const originalIndent = contentLines[i].match(/^\s*/)?.[0] || '';
const newLines = normalizedNew.split('\n').map((line, j) => {
if (j === 0) return originalIndent + line.trimStart();
// For subsequent lines, try to preserve relative indentation
const oldIndent = oldLines[j]?.match(/^\s*/)?.[0] || '';
const newIndent = line.match(/^\s*/)?.[0] || '';
if (oldIndent && newIndent) {
const relativeIndent = newIndent.length - oldIndent.length;
return originalIndent + ' '.repeat(Math.max(0, relativeIndent)) + line.trimStart();
}
return line;
});
contentLines.splice(i, oldLines.length, ...newLines);
modifiedContent = contentLines.join('\n');
matchFound = true;
break;
}
}
if (!matchFound) {
throw new Error(`Could not find exact match for edit:\n${edit.oldText}`);
}
}
// Create unified diff
const diff = createUnifiedDiff(content, modifiedContent, filePath);
// Format diff with appropriate number of backticks
let numBackticks = 3;
while (diff.includes('`'.repeat(numBackticks))) {
numBackticks++;
}
const formattedDiff = `${'`'.repeat(numBackticks)}diff\n${diff}${'`'.repeat(numBackticks)}\n\n`;
if (!dryRun) {
// Security: Use atomic rename to prevent race conditions where symlinks
// could be created between validation and write. Rename operations
// replace the target file atomically and don't follow symlinks.
const tempPath = `${filePath}.${randomBytes(16).toString('hex')}.tmp`;
try {
await fs.writeFile(tempPath, modifiedContent, 'utf-8');
await fs.rename(tempPath, filePath);
} catch (error) {
try {
await fs.unlink(tempPath);
} catch {}
throw error;
}
}
return formattedDiff;
}
// Helper functions
function formatSize(bytes: number): string {
const units = ['B', 'KB', 'MB', 'GB', 'TB'];
if (bytes === 0) return '0 B';
const i = Math.floor(Math.log(bytes) / Math.log(1024));
if (i === 0) return `${bytes} ${units[i]}`;
return `${(bytes / Math.pow(1024, i)).toFixed(2)} ${units[i]}`;
}
// Memory-efficient implementation to get the last N lines of a file
async function tailFile(filePath: string, numLines: number): Promise<string> {
const CHUNK_SIZE = 1024; // Read 1KB at a time
const stats = await fs.stat(filePath);
const fileSize = stats.size;
if (fileSize === 0) return '';
// Open file for reading
const fileHandle = await fs.open(filePath, 'r');
try {
const lines: string[] = [];
let position = fileSize;
let chunk = Buffer.alloc(CHUNK_SIZE);
let linesFound = 0;
let remainingText = '';
// Read chunks from the end of the file until we have enough lines
while (position > 0 && linesFound < numLines) {
const size = Math.min(CHUNK_SIZE, position);
position -= size;
const { bytesRead } = await fileHandle.read(chunk, 0, size, position);
if (!bytesRead) break;
// Get the chunk as a string and prepend any remaining text from previous iteration
const readData = chunk.slice(0, bytesRead).toString('utf-8');
const chunkText = readData + remainingText;
// Split by newlines and count
const chunkLines = normalizeLineEndings(chunkText).split('\n');
// If we haven't reached the start of the file, the first line is likely incomplete
// Save it to prepend to the next chunk
if (position > 0) {
remainingText = chunkLines[0];
chunkLines.shift(); // Remove the first (incomplete) line
}
// Add lines to our result (up to the number we need)
for (let i = chunkLines.length - 1; i >= 0 && linesFound < numLines; i--) {
lines.unshift(chunkLines[i]);
linesFound++;
}
}
return lines.join('\n');
} finally {
await fileHandle.close();
}
}
// New function to get the first N lines of a file
async function headFile(filePath: string, numLines: number): Promise<string> {
const fileHandle = await fs.open(filePath, 'r');
try {
const lines: string[] = [];
let buffer = '';
let bytesRead = 0;
const chunk = Buffer.alloc(1024); // 1KB buffer
// Read chunks and count lines until we have enough or reach EOF
while (lines.length < numLines) {
const result = await fileHandle.read(chunk, 0, chunk.length, bytesRead);
if (result.bytesRead === 0) break; // End of file
bytesRead += result.bytesRead;
buffer += chunk.slice(0, result.bytesRead).toString('utf-8');
const newLineIndex = buffer.lastIndexOf('\n');
if (newLineIndex !== -1) {
const completeLines = buffer.slice(0, newLineIndex).split('\n');
buffer = buffer.slice(newLineIndex + 1);
for (const line of completeLines) {
lines.push(line);
if (lines.length >= numLines) break;
}
}
}
// If there is leftover content and we still need lines, add it
if (buffer.length > 0 && lines.length < numLines) {
lines.push(buffer);
}
return lines.join('\n');
} finally {
await fileHandle.close();
}
}
// Reads a file as a stream of buffers, concatenates them, and then encodes
// the result to a Base64 string. This is a memory-efficient way to handle
// binary data from a stream before the final encoding.
async function readFileAsBase64Stream(filePath: string): Promise<string> {
return new Promise((resolve, reject) => {
const stream = createReadStream(filePath);
const chunks: Buffer[] = [];
stream.on('data', (chunk) => {
chunks.push(chunk as Buffer);
});
stream.on('end', () => {
const finalBuffer = Buffer.concat(chunks);
resolve(finalBuffer.toString('base64'));
});
stream.on('error', (err) => reject(err));
});
}
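// Example (illustrative): readFileAsBase64Stream('/photo.png') resolves to the whole
// file as a single base64 string. Chunks are concatenated before encoding because
// base64 works on 3-byte groups, which arbitrary chunk boundaries would split.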
// Tool handlers
@@ -477,14 +183,27 @@ server.setRequestHandler(ListToolsRequestSchema, async () => {
tools: [
{
name: "read_file",
description: "Read the complete contents of a file as text. DEPRECATED: Use read_text_file instead.",
inputSchema: zodToJsonSchema(ReadTextFileArgsSchema) as ToolInput,
},
{
name: "read_text_file",
description:
"Read the complete contents of a file from the file system. " +
"Read the complete contents of a file from the file system as text. " +
"Handles various text encodings and provides detailed error messages " +
"if the file cannot be read. Use this tool when you need to examine " +
"the contents of a single file. Use the 'head' parameter to read only " +
"the first N lines of a file, or the 'tail' parameter to read only " +
"the last N lines of a file. Only works within allowed directories.",
inputSchema: zodToJsonSchema(ReadFileArgsSchema) as ToolInput,
"the last N lines of a file. Operates on the file as text regardless of extension. " +
"Only works within allowed directories.",
inputSchema: zodToJsonSchema(ReadTextFileArgsSchema) as ToolInput,
},
{
name: "read_media_file",
description:
"Read an image or audio file. Returns the base64 encoded data and MIME type. " +
"Only works within allowed directories.",
inputSchema: zodToJsonSchema(ReadMediaFileArgsSchema) as ToolInput,
},
{
name: "read_multiple_files",
@@ -561,9 +280,9 @@ server.setRequestHandler(ListToolsRequestSchema, async () => {
name: "search_files",
description:
"Recursively search for files and directories matching a pattern. " +
"Searches through all subdirectories from the starting path. The search " +
"is case-insensitive and matches partial names. Returns full paths to all " +
"matching items. Great for finding files when you don't know their exact location. " +
"The patterns should be glob-style patterns that match paths relative to the working directory. " +
"Use pattern like '*.ext' to match files in current directory, and '**/*.ext' to match files in all subdirectories. " +
"Returns full paths to all matching items. Great for finding files when you don't know their exact location. " +
"Only searches within allowed directories.",
inputSchema: zodToJsonSchema(SearchFilesArgsSchema) as ToolInput,
},
@@ -579,8 +298,10 @@ server.setRequestHandler(ListToolsRequestSchema, async () => {
{
name: "list_allowed_directories",
description:
"Returns the list of root directories that this server is allowed to access. " +
"Use this to understand which directories are available before trying to access files. ",
"Returns the list of directories that this server is allowed to access. " +
"Subdirectories within these allowed directories are also accessible. " +
"Use this to understand which directories and their nested paths are available " +
"before trying to access files.",
inputSchema: {
type: "object",
properties: {},
@@ -597,17 +318,18 @@ server.setRequestHandler(CallToolRequestSchema, async (request) => {
const { name, arguments: args } = request.params;
switch (name) {
case "read_file": {
const parsed = ReadFileArgsSchema.safeParse(args);
case "read_file":
case "read_text_file": {
const parsed = ReadTextFileArgsSchema.safeParse(args);
if (!parsed.success) {
throw new Error(`Invalid arguments for read_file: ${parsed.error}`);
throw new Error(`Invalid arguments for read_text_file: ${parsed.error}`);
}
const validPath = await validatePath(parsed.data.path);
if (parsed.data.head && parsed.data.tail) {
throw new Error("Cannot specify both head and tail parameters simultaneously");
}
if (parsed.data.tail) {
// Use memory-efficient tail implementation for large files
const tailContent = await tailFile(validPath, parsed.data.tail);
@@ -615,7 +337,7 @@ server.setRequestHandler(CallToolRequestSchema, async (request) => {
content: [{ type: "text", text: tailContent }],
};
}
if (parsed.data.head) {
// Use memory-efficient head implementation for large files
const headContent = await headFile(validPath, parsed.data.head);
@@ -623,13 +345,44 @@ server.setRequestHandler(CallToolRequestSchema, async (request) => {
content: [{ type: "text", text: headContent }],
};
}
const content = await fs.readFile(validPath, "utf-8");
const content = await readFileContent(validPath);
return {
content: [{ type: "text", text: content }],
};
}
case "read_media_file": {
const parsed = ReadMediaFileArgsSchema.safeParse(args);
if (!parsed.success) {
throw new Error(`Invalid arguments for read_media_file: ${parsed.error}`);
}
const validPath = await validatePath(parsed.data.path);
const extension = path.extname(validPath).toLowerCase();
const mimeTypes: Record<string, string> = {
".png": "image/png",
".jpg": "image/jpeg",
".jpeg": "image/jpeg",
".gif": "image/gif",
".webp": "image/webp",
".bmp": "image/bmp",
".svg": "image/svg+xml",
".mp3": "audio/mpeg",
".wav": "audio/wav",
".ogg": "audio/ogg",
".flac": "audio/flac",
};
const mimeType = mimeTypes[extension] || "application/octet-stream";
const data = await readFileAsBase64Stream(validPath);
const type = mimeType.startsWith("image/")
? "image"
: mimeType.startsWith("audio/")
? "audio"
: "blob";
return {
content: [{ type, data, mimeType }],
};
}
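// Example response shape (illustrative): requesting a '.png' file yields
// { content: [{ type: "image", data: "<base64>", mimeType: "image/png" }] },
// while an unknown extension falls back to type "blob" with "application/octet-stream".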
case "read_multiple_files": {
const parsed = ReadMultipleFilesArgsSchema.safeParse(args);
if (!parsed.success) {
@@ -639,7 +392,7 @@ server.setRequestHandler(CallToolRequestSchema, async (request) => {
parsed.data.paths.map(async (filePath: string) => {
try {
const validPath = await validatePath(filePath);
const content = await fs.readFile(validPath, "utf-8");
const content = await readFileContent(validPath);
return `${filePath}:\n${content}\n`;
} catch (error) {
const errorMessage = error instanceof Error ? error.message : String(error);
@@ -658,31 +411,7 @@ server.setRequestHandler(CallToolRequestSchema, async (request) => {
throw new Error(`Invalid arguments for write_file: ${parsed.error}`);
}
const validPath = await validatePath(parsed.data.path);
try {
// Security: 'wx' flag ensures exclusive creation - fails if file/symlink exists,
// preventing writes through pre-existing symlinks
await fs.writeFile(validPath, parsed.data.content, { encoding: "utf-8", flag: 'wx' });
} catch (error) {
if ((error as NodeJS.ErrnoException).code === 'EEXIST') {
// Security: Use atomic rename to prevent race conditions where symlinks
// could be created between validation and write. Rename operations
// replace the target file atomically and don't follow symlinks.
const tempPath = `${validPath}.${randomBytes(16).toString('hex')}.tmp`;
try {
await fs.writeFile(tempPath, parsed.data.content, 'utf-8');
await fs.rename(tempPath, validPath);
} catch (renameError) {
try {
await fs.unlink(tempPath);
} catch {}
throw renameError;
}
} else {
throw error;
}
}
await writeFileContent(validPath, parsed.data.content);
return {
content: [{ type: "text", text: `Successfully wrote to ${parsed.data.path}` }],
};
@@ -734,7 +463,7 @@ server.setRequestHandler(CallToolRequestSchema, async (request) => {
}
const validPath = await validatePath(parsed.data.path);
const entries = await fs.readdir(validPath, { withFileTypes: true });
// Get detailed information for each entry
const detailedEntries = await Promise.all(
entries.map(async (entry) => {
@@ -757,7 +486,7 @@ server.setRequestHandler(CallToolRequestSchema, async (request) => {
}
})
);
// Sort entries based on sortBy parameter
const sortedEntries = [...detailedEntries].sort((a, b) => {
if (parsed.data.sortBy === 'size') {
@@ -766,29 +495,29 @@ server.setRequestHandler(CallToolRequestSchema, async (request) => {
// Default sort by name
return a.name.localeCompare(b.name);
});
// Format the output
const formattedEntries = sortedEntries.map(entry =>
const formattedEntries = sortedEntries.map(entry =>
`${entry.isDirectory ? "[DIR]" : "[FILE]"} ${entry.name.padEnd(30)} ${
entry.isDirectory ? "" : formatSize(entry.size).padStart(10)
}`
);
// Add summary
const totalFiles = detailedEntries.filter(e => !e.isDirectory).length;
const totalDirs = detailedEntries.filter(e => e.isDirectory).length;
const totalSize = detailedEntries.reduce((sum, entry) => sum + (entry.isDirectory ? 0 : entry.size), 0);
const summary = [
"",
`Total: ${totalFiles} files, ${totalDirs} directories`,
`Combined size: ${formatSize(totalSize)}`
];
return {
content: [{
type: "text",
text: [...formattedEntries, ...summary].join("\n")
content: [{
type: "text",
text: [...formattedEntries, ...summary].join("\n")
}],
};
}
@@ -799,43 +528,58 @@ server.setRequestHandler(CallToolRequestSchema, async (request) => {
throw new Error(`Invalid arguments for directory_tree: ${parsed.error}`);
}
interface TreeEntry {
name: string;
type: 'file' | 'directory';
children?: TreeEntry[];
}
interface TreeEntry {
name: string;
type: 'file' | 'directory';
children?: TreeEntry[];
}
const rootPath = parsed.data.path;
async function buildTree(currentPath: string): Promise<TreeEntry[]> {
const validPath = await validatePath(currentPath);
const entries = await fs.readdir(validPath, {withFileTypes: true});
const result: TreeEntry[] = [];
async function buildTree(currentPath: string, excludePatterns: string[] = []): Promise<TreeEntry[]> {
const validPath = await validatePath(currentPath);
const entries = await fs.readdir(validPath, {withFileTypes: true});
const result: TreeEntry[] = [];
for (const entry of entries) {
const entryData: TreeEntry = {
name: entry.name,
type: entry.isDirectory() ? 'directory' : 'file'
};
if (entry.isDirectory()) {
const subPath = path.join(currentPath, entry.name);
entryData.children = await buildTree(subPath);
for (const entry of entries) {
const relativePath = path.relative(rootPath, path.join(currentPath, entry.name));
const shouldExclude = excludePatterns.some(pattern => {
if (pattern.includes('*')) {
return minimatch(relativePath, pattern, {dot: true});
}
// For files: match exact name or as part of path
// For directories: match as directory path
return minimatch(relativePath, pattern, {dot: true}) ||
minimatch(relativePath, `**/${pattern}`, {dot: true}) ||
minimatch(relativePath, `**/${pattern}/**`, {dot: true});
});
if (shouldExclude)
continue;
result.push(entryData);
const entryData: TreeEntry = {
name: entry.name,
type: entry.isDirectory() ? 'directory' : 'file'
};
if (entry.isDirectory()) {
const subPath = path.join(currentPath, entry.name);
entryData.children = await buildTree(subPath, excludePatterns);
}
return result;
result.push(entryData);
}
const treeData = await buildTree(parsed.data.path);
return {
content: [{
type: "text",
text: JSON.stringify(treeData, null, 2)
}],
};
return result;
}
const treeData = await buildTree(rootPath, parsed.data.excludePatterns);
return {
content: [{
type: "text",
text: JSON.stringify(treeData, null, 2)
}],
};
}
case "move_file": {
const parsed = MoveFileArgsSchema.safeParse(args);
if (!parsed.success) {
@@ -855,7 +599,7 @@ server.setRequestHandler(CallToolRequestSchema, async (request) => {
throw new Error(`Invalid arguments for search_files: ${parsed.error}`);
}
const validPath = await validatePath(parsed.data.path);
const results = await searchFiles(validPath, parsed.data.pattern, parsed.data.excludePatterns);
const results = await searchFilesWithValidation(validPath, parsed.data.pattern, allowedDirectories, { excludePatterns: parsed.data.excludePatterns });
return {
content: [{ type: "text", text: results.length > 0 ? results.join("\n") : "No matches found" }],
};
@@ -901,6 +645,7 @@ async function updateAllowedDirectoriesFromRoots(requestedRoots: Root[]) {
const validatedRootDirs = await getValidRootDirectories(requestedRoots);
if (validatedRootDirs.length > 0) {
allowedDirectories = [...validatedRootDirs];
setAllowedDirectories(allowedDirectories); // Update the global state in lib.ts
console.error(`Updated allowed directories from MCP roots: ${validatedRootDirs.length} valid directories`);
} else {
console.error("No valid root directories provided by client");

392
src/filesystem/lib.ts Normal file
View File

@@ -0,0 +1,392 @@
import fs from "fs/promises";
import path from "path";
import os from 'os';
import { randomBytes } from 'crypto';
import { diffLines, createTwoFilesPatch } from 'diff';
import { minimatch } from 'minimatch';
import { normalizePath, expandHome } from './path-utils.js';
import { isPathWithinAllowedDirectories } from './path-validation.js';
// Global allowed directories - set by the main module
let allowedDirectories: string[] = [];
// Function to set allowed directories from the main module
export function setAllowedDirectories(directories: string[]): void {
allowedDirectories = [...directories];
}
// Function to get current allowed directories
export function getAllowedDirectories(): string[] {
return [...allowedDirectories];
}
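// Illustrative usage (not part of this change): index.ts seeds this module once at
// startup and again whenever MCP roots update, e.g.:
//   setAllowedDirectories(allowedDirectories);
//   getAllowedDirectories(); // returns a defensive copy, not the live array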
// Type definitions
interface FileInfo {
size: number;
created: Date;
modified: Date;
accessed: Date;
isDirectory: boolean;
isFile: boolean;
permissions: string;
}
export interface SearchOptions {
excludePatterns?: string[];
}
export interface SearchResult {
path: string;
isDirectory: boolean;
}
// Pure Utility Functions
export function formatSize(bytes: number): string {
const units = ['B', 'KB', 'MB', 'GB', 'TB'];
if (bytes === 0) return '0 B';
const i = Math.floor(Math.log(bytes) / Math.log(1024));
if (i < 0 || i === 0) return `${bytes} ${units[0]}`;
const unitIndex = Math.min(i, units.length - 1);
return `${(bytes / Math.pow(1024, unitIndex)).toFixed(2)} ${units[unitIndex]}`;
}
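// Example values (illustrative, taken from the unit tests above):
//   formatSize(0)                        // '0 B'
//   formatSize(1536)                     // '1.50 KB'
//   formatSize(1048576)                  // '1.00 MB'
//   formatSize(Number.MAX_SAFE_INTEGER)  // clamped to the 'TB' unit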
export function normalizeLineEndings(text: string): string {
return text.replace(/\r\n/g, '\n');
}
export function createUnifiedDiff(originalContent: string, newContent: string, filepath: string = 'file'): string {
// Ensure consistent line endings for diff
const normalizedOriginal = normalizeLineEndings(originalContent);
const normalizedNew = normalizeLineEndings(newContent);
return createTwoFilesPatch(
filepath,
filepath,
normalizedOriginal,
normalizedNew,
'original',
'modified'
);
}
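// Example (illustrative, mirroring the tests above): CRLF input is normalized first,
// so only real content changes appear in the patch:
//   createUnifiedDiff('line1\r\nline2\r\n', 'line1\nmodified line2\n', 'test.txt');
//   // header contains '--- test.txt' / '+++ test.txt'; body contains '-line2' and '+modified line2'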
// Security & Validation Functions
export async function validatePath(requestedPath: string): Promise<string> {
const expandedPath = expandHome(requestedPath);
const absolute = path.isAbsolute(expandedPath)
? path.resolve(expandedPath)
: path.resolve(process.cwd(), expandedPath);
const normalizedRequested = normalizePath(absolute);
// Security: Check if path is within allowed directories before any file operations
const isAllowed = isPathWithinAllowedDirectories(normalizedRequested, allowedDirectories);
if (!isAllowed) {
throw new Error(`Access denied - path outside allowed directories: ${absolute} not in ${allowedDirectories.join(', ')}`);
}
// Security: Handle symlinks by checking their real path to prevent symlink attacks
// This prevents attackers from creating symlinks that point outside allowed directories
try {
const realPath = await fs.realpath(absolute);
const normalizedReal = normalizePath(realPath);
if (!isPathWithinAllowedDirectories(normalizedReal, allowedDirectories)) {
throw new Error(`Access denied - symlink target outside allowed directories: ${realPath} not in ${allowedDirectories.join(', ')}`);
}
return realPath;
} catch (error) {
// Security: For new files that don't exist yet, verify parent directory
// This ensures we can't create files in unauthorized locations
if ((error as NodeJS.ErrnoException).code === 'ENOENT') {
const parentDir = path.dirname(absolute);
try {
const realParentPath = await fs.realpath(parentDir);
const normalizedParent = normalizePath(realParentPath);
if (!isPathWithinAllowedDirectories(normalizedParent, allowedDirectories)) {
throw new Error(`Access denied - parent directory outside allowed directories: ${realParentPath} not in ${allowedDirectories.join(', ')}`);
}
return absolute;
} catch {
throw new Error(`Parent directory does not exist: ${parentDir}`);
}
}
throw error;
}
}
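// Example (illustrative; assumes allowed directories ['/home/user', '/tmp'] as in the tests):
//   await validatePath('/home/user/file.txt');    // resolves symlinks, returns the real path
//   await validatePath('/etc/passwd');            // rejects: 'Access denied - path outside allowed directories'
//   await validatePath('/home/user/newfile.txt'); // ENOENT: validates the parent directory instead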
// File Operations
export async function getFileStats(filePath: string): Promise<FileInfo> {
const stats = await fs.stat(filePath);
return {
size: stats.size,
created: stats.birthtime,
modified: stats.mtime,
accessed: stats.atime,
isDirectory: stats.isDirectory(),
isFile: stats.isFile(),
permissions: stats.mode.toString(8).slice(-3),
};
}
export async function readFileContent(filePath: string, encoding: string = 'utf-8'): Promise<string> {
return await fs.readFile(filePath, encoding as BufferEncoding);
}
export async function writeFileContent(filePath: string, content: string): Promise<void> {
try {
// Security: 'wx' flag ensures exclusive creation - fails if file/symlink exists,
// preventing writes through pre-existing symlinks
await fs.writeFile(filePath, content, { encoding: "utf-8", flag: 'wx' });
} catch (error) {
if ((error as NodeJS.ErrnoException).code === 'EEXIST') {
// Security: Use atomic rename to prevent race conditions where symlinks
// could be created between validation and write. Rename operations
// replace the target file atomically and don't follow symlinks.
const tempPath = `${filePath}.${randomBytes(16).toString('hex')}.tmp`;
try {
await fs.writeFile(tempPath, content, 'utf-8');
await fs.rename(tempPath, filePath);
} catch (renameError) {
try {
await fs.unlink(tempPath);
} catch {}
throw renameError;
}
} else {
throw error;
}
}
}
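// Design note (illustrative): the 'wx' flag is the fast path for brand-new files and
// refuses to follow a pre-planted symlink; only on EEXIST does the function fall back
// to the temp-file-plus-rename sequence, which replaces the target atomically.
//   await writeFileContent('/home/user/notes.txt', 'hello'); // path is an assumption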
// File Editing Functions
interface FileEdit {
oldText: string;
newText: string;
}
export async function applyFileEdits(
filePath: string,
edits: FileEdit[],
dryRun: boolean = false
): Promise<string> {
// Read file content and normalize line endings
const content = normalizeLineEndings(await fs.readFile(filePath, 'utf-8'));
// Apply edits sequentially
let modifiedContent = content;
for (const edit of edits) {
const normalizedOld = normalizeLineEndings(edit.oldText);
const normalizedNew = normalizeLineEndings(edit.newText);
// If exact match exists, use it
if (modifiedContent.includes(normalizedOld)) {
modifiedContent = modifiedContent.replace(normalizedOld, normalizedNew);
continue;
}
// Otherwise, try line-by-line matching with flexibility for whitespace
const oldLines = normalizedOld.split('\n');
const contentLines = modifiedContent.split('\n');
let matchFound = false;
for (let i = 0; i <= contentLines.length - oldLines.length; i++) {
const potentialMatch = contentLines.slice(i, i + oldLines.length);
// Compare lines with normalized whitespace
const isMatch = oldLines.every((oldLine, j) => {
const contentLine = potentialMatch[j];
return oldLine.trim() === contentLine.trim();
});
if (isMatch) {
// Preserve original indentation of first line
const originalIndent = contentLines[i].match(/^\s*/)?.[0] || '';
const newLines = normalizedNew.split('\n').map((line, j) => {
if (j === 0) return originalIndent + line.trimStart();
// For subsequent lines, try to preserve relative indentation
const oldIndent = oldLines[j]?.match(/^\s*/)?.[0] || '';
const newIndent = line.match(/^\s*/)?.[0] || '';
if (oldIndent && newIndent) {
const relativeIndent = newIndent.length - oldIndent.length;
return originalIndent + ' '.repeat(Math.max(0, relativeIndent)) + line.trimStart();
}
return line;
});
contentLines.splice(i, oldLines.length, ...newLines);
modifiedContent = contentLines.join('\n');
matchFound = true;
break;
}
}
if (!matchFound) {
throw new Error(`Could not find exact match for edit:\n${edit.oldText}`);
}
}
// Create unified diff
const diff = createUnifiedDiff(content, modifiedContent, filePath);
// Format diff with appropriate number of backticks
let numBackticks = 3;
while (diff.includes('`'.repeat(numBackticks))) {
numBackticks++;
}
const formattedDiff = `${'`'.repeat(numBackticks)}diff\n${diff}${'`'.repeat(numBackticks)}\n\n`;
if (!dryRun) {
// Security: Use atomic rename to prevent race conditions where symlinks
// could be created between validation and write. Rename operations
// replace the target file atomically and don't follow symlinks.
const tempPath = `${filePath}.${randomBytes(16).toString('hex')}.tmp`;
try {
await fs.writeFile(tempPath, modifiedContent, 'utf-8');
await fs.rename(tempPath, filePath);
} catch (error) {
try {
await fs.unlink(tempPath);
} catch {}
throw error;
}
}
return formattedDiff;
}
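// Example (illustrative, from the tests above): dryRun returns the formatted diff
// without touching disk; a real run writes a random '.tmp' sibling and renames it
// over the original.
//   const diff = await applyFileEdits('/test/file.txt',
//     [{ oldText: 'line2', newText: 'modified line2' }], true); // dry run, no write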
// Memory-efficient implementation to get the last N lines of a file
export async function tailFile(filePath: string, numLines: number): Promise<string> {
const CHUNK_SIZE = 1024; // Read 1KB at a time
const stats = await fs.stat(filePath);
const fileSize = stats.size;
if (fileSize === 0) return '';
// Open file for reading
const fileHandle = await fs.open(filePath, 'r');
try {
const lines: string[] = [];
let position = fileSize;
let chunk = Buffer.alloc(CHUNK_SIZE);
let linesFound = 0;
let remainingText = '';
// Read chunks from the end of the file until we have enough lines
while (position > 0 && linesFound < numLines) {
const size = Math.min(CHUNK_SIZE, position);
position -= size;
const { bytesRead } = await fileHandle.read(chunk, 0, size, position);
if (!bytesRead) break;
// Get the chunk as a string and prepend any remaining text from previous iteration
const readData = chunk.slice(0, bytesRead).toString('utf-8');
const chunkText = readData + remainingText;
// Split by newlines and count
const chunkLines = normalizeLineEndings(chunkText).split('\n');
// If this isn't the end of the file, the first line is likely incomplete
// Save it to prepend to the next chunk
if (position > 0) {
remainingText = chunkLines[0];
chunkLines.shift(); // Remove the first (incomplete) line
}
// Add lines to our result (up to the number we need)
for (let i = chunkLines.length - 1; i >= 0 && linesFound < numLines; i--) {
lines.unshift(chunkLines[i]);
linesFound++;
}
}
return lines.join('\n');
} finally {
await fileHandle.close();
}
}
// New function to get the first N lines of a file
export async function headFile(filePath: string, numLines: number): Promise<string> {
const fileHandle = await fs.open(filePath, 'r');
try {
const lines: string[] = [];
let buffer = '';
let bytesRead = 0;
const chunk = Buffer.alloc(1024); // 1KB buffer
// Read chunks and count lines until we have enough or reach EOF
while (lines.length < numLines) {
const result = await fileHandle.read(chunk, 0, chunk.length, bytesRead);
if (result.bytesRead === 0) break; // End of file
bytesRead += result.bytesRead;
buffer += chunk.slice(0, result.bytesRead).toString('utf-8');
const newLineIndex = buffer.lastIndexOf('\n');
if (newLineIndex !== -1) {
const completeLines = buffer.slice(0, newLineIndex).split('\n');
buffer = buffer.slice(newLineIndex + 1);
for (const line of completeLines) {
lines.push(line);
if (lines.length >= numLines) break;
}
}
}
// If there is leftover content and we still need lines, add it
if (buffer.length > 0 && lines.length < numLines) {
lines.push(buffer);
}
return lines.join('\n');
} finally {
await fileHandle.close();
}
}
export async function searchFilesWithValidation(
rootPath: string,
pattern: string,
allowedDirectories: string[],
options: SearchOptions = {}
): Promise<string[]> {
const { excludePatterns = [] } = options;
const results: string[] = [];
async function search(currentPath: string) {
const entries = await fs.readdir(currentPath, { withFileTypes: true });
for (const entry of entries) {
const fullPath = path.join(currentPath, entry.name);
try {
await validatePath(fullPath);
const relativePath = path.relative(rootPath, fullPath);
const shouldExclude = excludePatterns.some(excludePattern =>
minimatch(relativePath, excludePattern, { dot: true })
);
if (shouldExclude) continue;
// Use glob matching for the search pattern
if (minimatch(relativePath, pattern, { dot: true })) {
results.push(fullPath);
}
if (entry.isDirectory()) {
await search(fullPath);
}
} catch {
continue;
}
}
}
await search(rootPath);
return results;
}
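
To make the helpers above concrete, here is a minimal usage sketch (module path and file names are hypothetical; per the code above, `dryRun: true` returns the unified diff without writing):

```typescript
import { applyFileEdits, tailFile, headFile } from './lib.js'; // hypothetical module path

async function demo(): Promise<void> {
  // Preview an edit as a unified diff without modifying the file
  const diff = await applyFileEdits(
    '/tmp/example.txt',
    [{ oldText: 'hello world', newText: 'hello MCP' }],
    true, // dryRun: report only
  );
  console.log(diff);

  // Read just the last/first five lines without loading the whole file
  console.log(await tailFile('/tmp/example.txt', 5));
  console.log(await headFile('/tmp/example.txt', 5));
}

demo().catch(console.error);
```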

View File

@@ -1,6 +1,6 @@
{
"name": "@modelcontextprotocol/server-filesystem",
"version": "0.6.2",
"version": "0.6.3",
"description": "MCP server for filesystem access",
"license": "MIT",
"author": "Anthropic, PBC (https://anthropic.com)",
@@ -20,7 +20,7 @@
"test": "jest --config=jest.config.cjs --coverage"
},
"dependencies": {
"@modelcontextprotocol/sdk": "^1.12.3",
"@modelcontextprotocol/sdk": "^1.17.0",
"diff": "^5.1.0",
"glob": "^10.3.10",
"minimatch": "^10.0.1",
@@ -38,4 +38,4 @@
"ts-node": "^10.9.2",
"typescript": "^5.8.2"
}
}

View File

@@ -68,10 +68,19 @@ export function isPathWithinAllowedDirectories(absolutePath: string, allowedDire
}
// Special case for root directory to avoid double slash
// On Windows, we need to check if both paths are on the same drive
if (normalizedDir === path.sep) {
return normalizedPath.startsWith(path.sep);
}
// On Windows, also check for drive root (e.g., "C:\")
if (path.sep === '\\' && normalizedDir.match(/^[A-Za-z]:\\?$/)) {
// Ensure both paths are on the same drive
const dirDrive = normalizedDir.charAt(0).toLowerCase();
const pathDrive = normalizedPath.charAt(0).toLowerCase();
return pathDrive === dirDrive && normalizedPath.startsWith(normalizedDir.replace(/\\?$/, '\\'));
}
return normalizedPath.startsWith(normalizedDir + path.sep);
});
}
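
A test-style sketch of the intended Windows behavior (illustrative only; the export name and logic are taken from this diff, the import path is assumed):

```typescript
import { isPathWithinAllowedDirectories } from './path-validation.js';

// On Windows (path.sep === '\\'), a drive-root allow-list entry such as "C:\"
// matches only paths on the same drive:
console.assert(isPathWithinAllowedDirectories('C:\\Users\\alice\\notes.txt', ['C:\\']) === true);
console.assert(isPathWithinAllowedDirectories('D:\\data\\notes.txt', ['C:\\']) === false);
```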

View File

@@ -24,7 +24,8 @@ RUN --mount=type=cache,target=/root/.cache/uv \
FROM python:3.12-slim-bookworm
RUN apt-get update && apt-get install -y git git-lfs && rm -rf /var/lib/apt/lists/* \
    && git lfs install --system
WORKDIR /app

View File

@@ -57,10 +57,12 @@ Please note that mcp-server-git is currently in early development. The functiona
- Returns: Confirmation of reset operation
8. `git_log`
- Shows the commit logs with optional date filtering
- Inputs:
- `repo_path` (string): Path to Git repository
- `max_count` (number, optional): Maximum number of commits to show (default: 10)
- `start_timestamp` (string, optional): Start timestamp for filtering commits. Accepts ISO 8601 format (e.g., '2024-01-15T14:30:25'), relative dates (e.g., '2 weeks ago', 'yesterday'), or absolute dates (e.g., '2024-01-15', 'Jan 15 2024')
- `end_timestamp` (string, optional): End timestamp for filtering commits. Accepts ISO 8601 format (e.g., '2024-01-15T14:30:25'), relative dates (e.g., '2 weeks ago', 'yesterday'), or absolute dates (e.g., '2024-01-15', 'Jan 15 2024')
- Returns: Array of commit entries with hash, author, date, and message
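
   For illustration, a `git_log` call restricted to a date window might pass arguments like these (the repository path is a placeholder):

   ```json
   {
     "repo_path": "/path/to/repo",
     "max_count": 20,
     "start_timestamp": "2 weeks ago",
     "end_timestamp": "yesterday"
   }
   ```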
9. `git_create_branch`
@@ -82,13 +84,8 @@ Please note that mcp-server-git is currently in early development. The functiona
- `repo_path` (string): Path to Git repository
- `revision` (string): The revision (commit hash, branch name, tag) to show
- Returns: Contents of the specified commit
12. `git_init`
- Initializes a Git repository
- Inputs:
- `repo_path` (string): Path to directory to initialize git repo
- Returns: Confirmation of repository initialization
12. `git_branch`
- List Git branches
- Inputs:
- `repo_path` (string): Path to the Git repository.
@@ -173,20 +170,22 @@ For quick installation, use one of the one-click install buttons below...
[![Install with Docker in VS Code](https://img.shields.io/badge/VS_Code-Docker-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://insiders.vscode.dev/redirect/mcp/install?name=git&config=%7B%22command%22%3A%22docker%22%2C%22args%22%3A%5B%22run%22%2C%22--rm%22%2C%22-i%22%2C%22--mount%22%2C%22type%3Dbind%2Csrc%3D%24%7BworkspaceFolder%7D%2Cdst%3D%2Fworkspace%22%2C%22mcp%2Fgit%22%5D%7D) [![Install with Docker in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Docker-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://insiders.vscode.dev/redirect/mcp/install?name=git&config=%7B%22command%22%3A%22docker%22%2C%22args%22%3A%5B%22run%22%2C%22--rm%22%2C%22-i%22%2C%22--mount%22%2C%22type%3Dbind%2Csrc%3D%24%7BworkspaceFolder%7D%2Cdst%3D%2Fworkspace%22%2C%22mcp%2Fgit%22%5D%7D&quality=insiders)
For manual installation, you can configure the MCP server using one of these methods:

**Method 1: User Configuration (Recommended)**
Add the configuration to your user-level MCP configuration file. Open the Command Palette (`Ctrl + Shift + P`) and run `MCP: Open User Configuration`. This will open your user `mcp.json` file where you can add the server configuration.

**Method 2: Workspace Configuration**
Alternatively, you can add the configuration to a file called `.vscode/mcp.json` in your workspace. This will allow you to share the configuration with others.

> For more details about MCP configuration in VS Code, see the [official VS Code MCP documentation](https://code.visualstudio.com/docs/copilot/mcp).
```json
{
  "servers": {
    "git": {
      "command": "uvx",
      "args": ["mcp-server-git"]
    }
  }
}
```

View File

@@ -48,6 +48,14 @@ class GitReset(BaseModel):
class GitLog(BaseModel):
repo_path: str
max_count: int = 10
start_timestamp: Optional[str] = Field(
None,
description="Start timestamp for filtering commits. Accepts: ISO 8601 format (e.g., '2024-01-15T14:30:25'), relative dates (e.g., '2 weeks ago', 'yesterday'), or absolute dates (e.g., '2024-01-15', 'Jan 15 2024')"
)
end_timestamp: Optional[str] = Field(
None,
description="End timestamp for filtering commits. Accepts: ISO 8601 format (e.g., '2024-01-15T14:30:25'), relative dates (e.g., '2 weeks ago', 'yesterday'), or absolute dates (e.g., '2024-01-15', 'Jan 15 2024')"
)
class GitCreateBranch(BaseModel):
repo_path: str
@@ -62,8 +70,7 @@ class GitShow(BaseModel):
repo_path: str
revision: str
class GitBranch(BaseModel):
repo_path: str = Field(
@@ -83,6 +90,7 @@ class GitBranch(BaseModel):
description="The commit sha that branch should NOT contain. Do not pass anything to this param if no commit sha is specified",
)
class GitTools(str, Enum):
STATUS = "git_status"
DIFF_UNSTAGED = "git_diff_unstaged"
@@ -95,7 +103,7 @@ class GitTools(str, Enum):
CREATE_BRANCH = "git_create_branch"
CHECKOUT = "git_checkout"
SHOW = "git_show"
INIT = "git_init"
BRANCH = "git_branch"
def git_status(repo: git.Repo) -> str:
@@ -115,24 +123,51 @@ def git_commit(repo: git.Repo, message: str) -> str:
return f"Changes committed successfully with hash {commit.hexsha}"
def git_add(repo: git.Repo, files: list[str]) -> str:
if files == ["."]:
repo.git.add(".")
else:
repo.index.add(files)
return "Files staged successfully"
def git_reset(repo: git.Repo) -> str:
repo.index.reset()
return "All staged changes reset"
def git_log(repo: git.Repo, max_count: int = 10, start_timestamp: Optional[str] = None, end_timestamp: Optional[str] = None) -> list[str]:
if start_timestamp or end_timestamp:
# Use git log command with date filtering
args = []
if start_timestamp:
args.extend(['--since', start_timestamp])
if end_timestamp:
args.extend(['--until', end_timestamp])
args.extend(['--format=%H%n%an%n%ad%n%s%n'])
log_output = repo.git.log(*args).split('\n')
log = []
# Process commits in groups of 4 (hash, author, date, message)
for i in range(0, len(log_output), 4):
if i + 3 < len(log_output) and len(log) < max_count:
log.append(
f"Commit: {log_output[i]}\n"
f"Author: {log_output[i+1]}\n"
f"Date: {log_output[i+2]}\n"
f"Message: {log_output[i+3]}\n"
)
return log
else:
# Use existing logic for simple log without date filtering
commits = list(repo.iter_commits(max_count=max_count))
log = []
for commit in commits:
log.append(
f"Commit: {commit.hexsha!r}\n"
f"Author: {commit.author!r}\n"
f"Date: {commit.authored_datetime}\n"
f"Message: {commit.message!r}\n"
)
return log
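# For reference (illustrative comment, not part of the diff): the date-filtered
# branch above corresponds roughly to running the CLI directly, e.g.
#   git log --since "2 weeks ago" --until "yesterday" --format=%H%n%an%n%ad%n%s%n
# GitPython's repo.git.log(*args) forwards those flags verbatim.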
def git_create_branch(repo: git.Repo, branch_name: str, base_branch: str | None = None) -> str:
if base_branch:
@@ -147,12 +182,7 @@ def git_checkout(repo: git.Repo, branch_name: str) -> str:
repo.git.checkout(branch_name)
return f"Switched to branch '{branch_name}'"
def git_show(repo: git.Repo, revision: str) -> str:
commit = repo.commit(revision)
@@ -200,6 +230,7 @@ def git_branch(repo: git.Repo, branch_type: str, contains: str | None = None, no
return branch_info
async def serve(repository: Path | None) -> None:
logger = logging.getLogger(__name__)
@@ -271,15 +302,12 @@ async def serve(repository: Path | None) -> None:
description="Shows the contents of a commit",
inputSchema=GitShow.model_json_schema(),
),
Tool(
name=GitTools.BRANCH,
description="List Git branches",
inputSchema=GitBranch.model_json_schema(),
)
]
@@ -316,15 +344,7 @@ async def serve(repository: Path | None) -> None:
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
repo_path = Path(arguments["repo_path"])
# For all commands, we need an existing repo
repo = git.Repo(repo_path)
match name:
@@ -377,13 +397,19 @@ async def serve(repository: Path | None) -> None:
text=result
)]
case GitTools.LOG:
log = git_log(
repo,
arguments.get("max_count", 10),
arguments.get("start_timestamp"),
arguments.get("end_timestamp")
)
return [TextContent(
type="text",
text="Commit history:\n" + "\n".join(log)
)]
case GitTools.CREATE_BRANCH:
result = git_create_branch(
repo,
@@ -420,7 +446,7 @@ async def serve(repository: Path | None) -> None:
type="text",
text=result
)]
case _:
raise ValueError(f"Unknown tool: {name}")

View File

@@ -1,7 +1,7 @@
import pytest
from pathlib import Path
import git
from mcp_server_git.server import git_checkout, git_branch, git_add
import shutil
@pytest.fixture
@@ -68,3 +68,26 @@ def test_git_branch_not_contains(test_repository):
result = git_branch(test_repository, "local", not_contains=commit.hexsha)
assert "another-feature-branch" not in result
assert "master" in result
def test_git_add_all_files(test_repository):
file_path = Path(test_repository.working_dir) / "all_file.txt"
file_path.write_text("adding all")
result = git_add(test_repository, ["."])
staged_files = [item.a_path for item in test_repository.index.diff("HEAD")]
assert "all_file.txt" in staged_files
assert result == "Files staged successfully"
def test_git_add_specific_files(test_repository):
file1 = Path(test_repository.working_dir) / "file1.txt"
file2 = Path(test_repository.working_dir) / "file2.txt"
file1.write_text("file 1 content")
file2.write_text("file 2 content")
result = git_add(test_repository, ["file1.txt"])
staged_files = [item.a_path for item in test_repository.index.diff("HEAD")]
assert "file1.txt" in staged_files
assert "file2.txt" not in staged_files
assert result == "Files staged successfully"

View File

@@ -190,25 +190,27 @@ For quick installation, use one of the one-click installation buttons below:
[![Install with Docker in VS Code](https://img.shields.io/badge/VS_Code-Docker-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://insiders.vscode.dev/redirect/mcp/install?name=memory&config=%7B%22command%22%3A%22docker%22%2C%22args%22%3A%5B%22run%22%2C%22-i%22%2C%22-v%22%2C%22claude-memory%3A%2Fapp%2Fdist%22%2C%22--rm%22%2C%22mcp%2Fmemory%22%5D%7D) [![Install with Docker in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Docker-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://insiders.vscode.dev/redirect/mcp/install?name=memory&config=%7B%22command%22%3A%22docker%22%2C%22args%22%3A%5B%22run%22%2C%22-i%22%2C%22-v%22%2C%22claude-memory%3A%2Fapp%2Fdist%22%2C%22--rm%22%2C%22mcp%2Fmemory%22%5D%7D&quality=insiders)
For manual installation, you can configure the MCP server using one of these methods:

**Method 1: User Configuration (Recommended)**
Add the configuration to your user-level MCP configuration file. Open the Command Palette (`Ctrl + Shift + P`) and run `MCP: Open User Configuration`. This will open your user `mcp.json` file where you can add the server configuration.

**Method 2: Workspace Configuration**
Alternatively, you can add the configuration to a file called `.vscode/mcp.json` in your workspace. This will allow you to share the configuration with others.

> For more details about MCP configuration in VS Code, see the [official VS Code MCP documentation](https://code.visualstudio.com/docs/copilot/mcp).
#### NPX
```json
{
  "servers": {
    "memory": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-memory"
      ]
    }
  }
}
```
@@ -218,19 +220,17 @@ Optionally, you can add it to a file called `.vscode/mcp.json` in your workspace
```json
{
  "servers": {
    "memory": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "-v",
        "claude-memory:/app/dist",
        "--rm",
        "mcp/memory"
      ]
    }
  }
}
```
@@ -276,6 +276,8 @@ Docker:
docker build -t mcp/memory -f src/memory/Dockerfile .
```
For awareness: a prior `mcp/memory` volume may contain an `index.js` file that could be overwritten by the new container. If you are using a Docker volume for storage, delete the old volume's `index.js` file before starting the new container.
## License
This MCP server is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository.

View File

@@ -60,8 +60,18 @@ class KnowledgeGraphManager {
private async saveGraph(graph: KnowledgeGraph): Promise<void> {
const lines = [
...graph.entities.map(e => JSON.stringify({ type: "entity", ...e })),
...graph.relations.map(r => JSON.stringify({ type: "relation", ...r })),
...graph.entities.map(e => JSON.stringify({
type: "entity",
name: e.name,
entityType: e.entityType,
observations: e.observations
})),
...graph.relations.map(r => JSON.stringify({
type: "relation",
from: r.from,
to: r.to,
relationType: r.relationType
})),
];
await fs.writeFile(MEMORY_FILE_PATH, lines.join("\n"));
}
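// Illustration (comment, not from the diff): with the explicit field lists above,
// each line of the persisted file is a self-describing JSON record, e.g.
//   {"type":"entity","name":"Ada","entityType":"person","observations":["likes math"]}
//   {"type":"relation","from":"Ada","to":"Babbage","relationType":"collaborates_with"}
// Enumerating fields (rather than spreading ...e / ...r) keeps unexpected extra
// properties out of the serialized records.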
@@ -219,10 +229,12 @@ server.setRequestHandler(ListToolsRequestSchema, async () => {
},
},
required: ["name", "entityType", "observations"],
additionalProperties: false,
},
},
},
required: ["entities"],
additionalProperties: false,
},
},
{
@@ -241,10 +253,12 @@ server.setRequestHandler(ListToolsRequestSchema, async () => {
relationType: { type: "string", description: "The type of the relation" },
},
required: ["from", "to", "relationType"],
additionalProperties: false,
},
},
},
required: ["relations"],
additionalProperties: false,
},
},
{
@@ -266,10 +280,12 @@ server.setRequestHandler(ListToolsRequestSchema, async () => {
},
},
required: ["entityName", "contents"],
additionalProperties: false,
},
},
},
required: ["observations"],
additionalProperties: false,
},
},
{
@@ -285,6 +301,7 @@ server.setRequestHandler(ListToolsRequestSchema, async () => {
},
},
required: ["entityNames"],
additionalProperties: false,
},
},
{
@@ -306,10 +323,12 @@ server.setRequestHandler(ListToolsRequestSchema, async () => {
},
},
required: ["entityName", "observations"],
additionalProperties: false,
},
},
},
required: ["deletions"],
additionalProperties: false,
},
},
{
@@ -328,11 +347,13 @@ server.setRequestHandler(ListToolsRequestSchema, async () => {
relationType: { type: "string", description: "The type of the relation" },
},
required: ["from", "to", "relationType"],
additionalProperties: false,
},
description: "An array of relations to delete"
},
},
required: ["relations"],
additionalProperties: false,
},
},
{
@@ -341,6 +362,7 @@ server.setRequestHandler(ListToolsRequestSchema, async () => {
inputSchema: {
type: "object",
properties: {},
additionalProperties: false,
},
},
{
@@ -352,6 +374,7 @@ server.setRequestHandler(ListToolsRequestSchema, async () => {
query: { type: "string", description: "The search query to match against entity names, types, and observation content" },
},
required: ["query"],
additionalProperties: false,
},
},
{
@@ -367,6 +390,7 @@ server.setRequestHandler(ListToolsRequestSchema, async () => {
},
},
required: ["names"],
additionalProperties: false,
},
},
],
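// Illustrative effect of the additionalProperties hardening above (comment, not
// part of the diff): arguments carrying an unknown key, e.g.
//   { "query": "alice", "limit": 5 }
// for search_nodes, no longer validate against the schema, so hosts that enforce
// the schema reject the extra key instead of passing it through silently.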
@@ -376,6 +400,10 @@ server.setRequestHandler(ListToolsRequestSchema, async () => {
server.setRequestHandler(CallToolRequestSchema, async (request) => {
const { name, arguments: args } = request.params;
if (name === "read_graph") {
return { content: [{ type: "text", text: JSON.stringify(await knowledgeGraphManager.readGraph(), null, 2) }] };
}
if (!args) {
throw new Error(`No arguments provided for tool: ${name}`);
}
@@ -396,8 +424,6 @@ server.setRequestHandler(CallToolRequestSchema, async (request) => {
case "delete_relations":
await knowledgeGraphManager.deleteRelations(args.relations as Relation[]);
return { content: [{ type: "text", text: "Relations deleted successfully" }] };
case "read_graph":
return { content: [{ type: "text", text: JSON.stringify(await knowledgeGraphManager.readGraph(), null, 2) }] };
case "search_nodes":
return { content: [{ type: "text", text: JSON.stringify(await knowledgeGraphManager.searchNodes(args.query as string), null, 2) }] };
case "open_nodes":

View File

@@ -88,25 +88,27 @@ For quick installation, click one of the installation buttons below...
[![Install with Docker in VS Code](https://img.shields.io/badge/VS_Code-Docker-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://insiders.vscode.dev/redirect/mcp/install?name=sequentialthinking&config=%7B%22command%22%3A%22docker%22%2C%22args%22%3A%5B%22run%22%2C%22--rm%22%2C%22-i%22%2C%22mcp%2Fsequentialthinking%22%5D%7D) [![Install with Docker in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Docker-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://insiders.vscode.dev/redirect/mcp/install?name=sequentialthinking&config=%7B%22command%22%3A%22docker%22%2C%22args%22%3A%5B%22run%22%2C%22--rm%22%2C%22-i%22%2C%22mcp%2Fsequentialthinking%22%5D%7D&quality=insiders)
For manual installation, you can configure the MCP server using one of these methods:

**Method 1: User Configuration (Recommended)**
Add the configuration to your user-level MCP configuration file. Open the Command Palette (`Ctrl + Shift + P`) and run `MCP: Open User Configuration`. This will open your user `mcp.json` file where you can add the server configuration.

**Method 2: Workspace Configuration**
Alternatively, you can add the configuration to a file called `.vscode/mcp.json` in your workspace. This will allow you to share the configuration with others.

> For more details about MCP configuration in VS Code, see the [official VS Code MCP documentation](https://code.visualstudio.com/docs/copilot/mcp).
For NPX installation:
```json
{
  "servers": {
    "sequential-thinking": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-sequential-thinking"
      ]
    }
  }
}
```
@@ -116,17 +118,15 @@ For Docker installation:
```json
{
  "servers": {
    "sequential-thinking": {
      "command": "docker",
      "args": [
        "run",
        "--rm",
        "-i",
        "mcp/sequentialthinking"
      ]
    }
  }
}
```

View File

@@ -206,12 +206,12 @@ You should:
},
thoughtNumber: {
type: "integer",
description: "Current thought number",
description: "Current thought number (numeric value, e.g., 1, 2, 3)",
minimum: 1
},
totalThoughts: {
type: "integer",
description: "Estimated total thoughts needed",
description: "Estimated total thoughts needed (numeric value, e.g., 5, 10)",
minimum: 1
},
isRevision: {

View File

@@ -32,5 +32,8 @@ COPY --from=uv --chown=app:app /app/.venv /app/.venv
# Place executables in the environment at the front of the path
ENV PATH="/app/.venv/bin:$PATH"
# Set the LOCAL_TIMEZONE environment variable
ENV LOCAL_TIMEZONE=${LOCAL_TIMEZONE:-"UTC"}
# when running the container, add --local-timezone and a bind mount to the host's db file
ENTRYPOINT ["mcp-server-time", "--local-timezone", "${LOCAL_TIMEZONE}"]

View File

@@ -64,7 +64,7 @@ Add to your Claude settings:
"mcpServers": {
"time": {
"command": "docker",
"args": ["run", "-i", "--rm", "mcp/time"]
"args": ["run", "-i", "--rm", "-e", "LOCAL_TIMEZONE", "mcp/time"]
}
}
}
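
For a quick manual check outside of Claude, the same override can be exercised directly (illustrative command; substitute any IANA timezone, and assume the image was built as `mcp/time`):

```
docker run -i --rm -e LOCAL_TIMEZONE=Europe/Rome mcp/time
```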

View File

@@ -22,6 +22,7 @@ class TimeTools(str, Enum):
class TimeResult(BaseModel):
timezone: str
datetime: str
day_of_week: str
is_dst: bool
@@ -45,7 +46,8 @@ def get_local_tz(local_tz_override: str | None = None) -> ZoneInfo:
local_tzname = get_localzone_name()
if local_tzname is not None:
return ZoneInfo(local_tzname)
# Default to UTC if local timezone cannot be determined
return ZoneInfo("UTC")
def get_zoneinfo(timezone_name: str) -> ZoneInfo:
@@ -64,6 +66,7 @@ class TimeServer:
return TimeResult(
timezone=timezone_name,
datetime=current_time.isoformat(timespec="seconds"),
day_of_week=current_time.strftime("%A"),
is_dst=bool(current_time.dst()),
)
@@ -104,11 +107,13 @@ class TimeServer:
source=TimeResult(
timezone=source_tz,
datetime=source_time.isoformat(timespec="seconds"),
day_of_week=source_time.strftime("%A"),
is_dst=bool(source_time.dst()),
),
target=TimeResult(
timezone=target_tz,
datetime=target_time.isoformat(timespec="seconds"),
day_of_week=target_time.strftime("%A"),
is_dst=bool(target_time.dst()),
),
time_difference=time_diff_str,
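# Illustration (comment, not part of the diff): with day_of_week added, a
# serialized TimeResult looks like
#   {"timezone": "Europe/Rome", "datetime": "2025-01-15T14:30:25+01:00",
#    "day_of_week": "Wednesday", "is_dst": false}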

View File

@@ -2,8 +2,10 @@
from freezegun import freeze_time
from mcp.shared.exceptions import McpError
import pytest
from unittest.mock import patch
from zoneinfo import ZoneInfo
from mcp_server_time.server import TimeServer, get_local_tz
@pytest.mark.parametrize(
@@ -458,3 +460,69 @@ def test_convert_time(test_time, source_tz, time_str, target_tz, expected):
assert result.source.is_dst == expected["source"]["is_dst"]
assert result.target.is_dst == expected["target"]["is_dst"]
assert result.time_difference == expected["time_difference"]
def test_get_local_tz_with_override():
"""Test that timezone override works correctly."""
result = get_local_tz("America/New_York")
assert str(result) == "America/New_York"
assert isinstance(result, ZoneInfo)
def test_get_local_tz_with_invalid_override():
"""Test that invalid timezone override raises an error."""
with pytest.raises(Exception): # ZoneInfo will raise an exception
get_local_tz("Invalid/Timezone")
@patch('mcp_server_time.server.get_localzone_name')
def test_get_local_tz_with_valid_iana_name(mock_get_localzone):
"""Test that valid IANA timezone names from tzlocal work correctly."""
mock_get_localzone.return_value = "Europe/London"
result = get_local_tz()
assert str(result) == "Europe/London"
assert isinstance(result, ZoneInfo)
@patch('mcp_server_time.server.get_localzone_name')
def test_get_local_tz_when_none_returned(mock_get_localzone):
"""Test default to UTC when tzlocal returns None."""
mock_get_localzone.return_value = None
result = get_local_tz()
assert str(result) == "UTC"
@patch('mcp_server_time.server.get_localzone_name')
def test_get_local_tz_handles_windows_timezones(mock_get_localzone):
"""Test that tzlocal properly handles Windows timezone names.
Note: tzlocal should convert Windows names like 'Pacific Standard Time'
to proper IANA names like 'America/Los_Angeles'.
"""
# tzlocal should return IANA names even on Windows
mock_get_localzone.return_value = "America/Los_Angeles"
result = get_local_tz()
assert str(result) == "America/Los_Angeles"
assert isinstance(result, ZoneInfo)
@pytest.mark.parametrize(
"timezone_name",
[
"America/New_York",
"Europe/Paris",
"Asia/Tokyo",
"Australia/Sydney",
"Africa/Cairo",
"America/Sao_Paulo",
"Pacific/Auckland",
"UTC",
],
)
@patch('mcp_server_time.server.get_localzone_name')
def test_get_local_tz_various_timezones(mock_get_localzone, timezone_name):
"""Test various timezone names that tzlocal might return."""
mock_get_localzone.return_value = timezone_name
result = get_local_tz()
assert str(result) == timezone_name
assert isinstance(result, ZoneInfo)