Conversational Analytics with Claude's MCP
If you haven't yet heard of MCP, or the Model Context Protocol, the official documentation describes it as:
> "An open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools."
This architecture diagram does a great job of showcasing the use cases:

MCP servers let LLMs and other clients interface with external systems. They can be more than simple REST wrappers: an MCP server can handle validation, processing, and other logic that sits between the model and the system it talks to.
Popular MCPs include:
- PostgreSQL: Read database data.
- GitHub: Interact with repos.
- Brave Search: Search the web.
- Fetch: Make HTTP requests.
- Puppeteer: Browse the web.
- And many others found here
What's exciting is how MCP servers can be run: as npx scripts that let the model interact with JavaScript programs, as Docker containers enabling tools like Puppeteer, and more. And it isn't just Claude that integrates with them: the docs list over a dozen clients, among which are recognizable names like Windsurf, Cursor, and Continue. Truly unlimited possibilities for letting models interact with systems.
This post isn't about the MCP spec, though; if you're interested, you can read it here: https://github.com/modelcontextprotocol/specification
Over the past few days especially, the hype around MCP on X has been intense. That doesn't mean MCP isn't useful, though. To demonstrate the technology, I decided to create an MCP server for analytics. We've all used ChatGPT's Data Analyst feature for years now, but one area where it falls short is that you have to hand it the data. What if an LLM could find the data to analyze itself? That would be far more useful than simply configuring charts from CSVs in a Jupyter Notebook (which, don't get me wrong, is impressive).
Now all you have to do is ask:
> Help me get viewer counts for the last two weeks from fathom

To accomplish this, I began by utilizing the Fathom Analytics API. It is powerful and has generous rate limits for tasks like this.
Making HTTP calls directly was alright, but I wanted something more polished, so I decided to create a TypeScript client for the API that would provide input validation (the API's error responses weren't that helpful, so extra feedback can benefit the LLM). Surprisingly, there wasn't an existing API client/SDK, so I made one.
You can install it from npm or JSR using `npm install @mackenly/fathom-api` and find examples here. Its only dependency is Zod for input validation, so it is widely compatible across a variety of runtimes.
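The published client uses Zod for that validation, but the idea is simple enough to sketch without it. This dependency-free example (the `validateDateRange` helper and its messages are my own invention, not part of the package) shows the kind of descriptive feedback that gives an LLM something actionable when it passes bad input:

```typescript
// Hypothetical sketch of input validation before an API call.
// A raw API might just return a 422; returning descriptive
// messages gives the calling LLM something concrete to fix.
interface DateRangeInput {
  dateFrom: string; // expected format: YYYY-MM-DD
  dateTo: string;
}

function validateDateRange(input: DateRangeInput): string[] {
  const errors: string[] = [];
  const isoDate = /^\d{4}-\d{2}-\d{2}$/;

  if (!isoDate.test(input.dateFrom)) {
    errors.push(`dateFrom must be YYYY-MM-DD, got "${input.dateFrom}"`);
  }
  if (!isoDate.test(input.dateTo)) {
    errors.push(`dateTo must be YYYY-MM-DD, got "${input.dateTo}"`);
  }
  // ISO dates compare correctly as strings, so a lexicographic
  // comparison is enough to catch a reversed range.
  if (errors.length === 0 && input.dateFrom > input.dateTo) {
    errors.push("dateFrom must not be later than dateTo");
  }
  return errors;
}
```

The errors are returned as a list rather than thrown, so a caller (or an MCP tool wrapping this) can hand all of them back to the model in one response.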
This made interacting with the API really simple and let me add convenience methods so things like pagination don't have to be managed manually:
```typescript
import { FathomApi, FathomApiError } from '@mackenly/fathom-api';

// Create a new client with your API token
const fathom = new FathomApi({
  token: 'your-api-token'
});

// Example: List all sites
async function listSites() {
  try {
    const sites = await fathom.api.getAllSites();
    console.log('Sites:', sites);
  } catch (error) {
    if (error instanceof FathomApiError) {
      console.error('API Error:', error.message);
    } else {
      console.error('Error:', error);
    }
  }
}

listSites();
```
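Under the hood, a convenience method like `getAllSites()` can hide cursor-based pagination. Here's a generic sketch of that pattern (the `Page` shape and `fetchAllPages` helper are illustrative, not the package's actual internals):

```typescript
// Generic cursor-pagination helper: keep fetching pages until the
// API reports there is no more data, then return the merged list.
interface Page<T> {
  data: T[];
  hasMore: boolean;
  nextCursor?: string;
}

async function fetchAllPages<T>(
  fetchPage: (cursor?: string) => Promise<Page<T>>
): Promise<T[]> {
  const items: T[] = [];
  let cursor: string | undefined;

  while (true) {
    const page = await fetchPage(cursor);
    items.push(...page.data);
    if (!page.hasMore || page.nextCursor === undefined) break;
    cursor = page.nextCursor;
  }
  return items;
}
```

A "get all" convenience method is then just this helper wrapped around the single-page call, which is exactly the kind of bookkeeping you don't want an LLM to have to reason about mid-conversation.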
With this published, I then moved on to building the MCP server using the MCP TypeScript SDK. The SDK makes it super easy to get going, and the Inspector lets you test your MCP server in a browser.
This brings us to the finished MCP server. It is open source, of course, at mackenly/mcp-fathom-analytics, and because it's JavaScript, setup is as easy as having a recent Node version installed, grabbing an API key, and adding the following to your config:
```json
{
  "mcpServers": {
    "fathom-analytics": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-fathom-analytics"
      ],
      "env": {
        "FATHOM_API_KEY": "your_api_key_here"
      }
    }
  }
}
```
With the MCP installed in Claude Desktop, all I had to do was ask, and I got a fully detailed analysis with data visualizations.
> Look at my Fathom sites. Find me the most popular sites and details about traffic referrals. Plot this in a chart or graph.
The response:
# Top Sites by Traffic
I've analyzed your Fathom Analytics account and identified the most popular sites. All domain names have been redacted as requested and labeled as "Site 1", "Site 2", etc.
## Key Findings:
1. Most Pageviews: Site 5 leads with 111,183 pageviews but has relatively fewer visits (9,248), indicating users view many pages per session.
2. Most Visits: Site 9 has the highest number of visits at 63,229, with 101,522 pageviews, making it your second most viewed site.
3. Engagement: Site 5 has an exceptional engagement rate with 12.02 pages per visit, significantly higher than other sites (which range from 1.38 to 1.80 pages per visit).
## Traffic Referral Sources:
1. Site 9 (highest visits):
- Search engines drive most traffic (Google: 68%, Bing: 1.3%, DuckDuckGo: 1%)
- Direct traffic accounts for 27% of visits
- Social media (primarily Facebook) drives about 2.4% of traffic
2. Site 5 (highest pageviews):
- Direct traffic is dominant at 72% of visits
- Google search brings 24% of visitors
- Other search engines contribute smaller percentages (Bing: 2.5%, Yandex: 0.4%)
## Analysis
- Site 5 has exceptional engagement (12.02 pages per visit) but relies heavily on direct traffic (72%)
- Site 9 gets most of its traffic from search engines (primarily Google at 68%)
- Your sites generally show good traffic diversity with a mix of search, direct, and social media referrals
Charts/graphs:



Pretty awesome, I think! I hope the MCP server and/or package is useful. If you have any feedback, questions, or feature requests, leave a comment below or reach out.