Connecting GitHub with Claude Code through MCP
I've been exploring ways to give Claude Code more context about my projects, and the GitHub MCP server caught my attention. Setting it up turned out to be straightforward.
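The core of it is a single registration command. As a rough sketch (assuming the npx-distributed reference GitHub MCP server; the package name and token scopes may differ by the time you read this):

```bash
# Register the GitHub MCP server with Claude Code.
# <your-token> is a GitHub personal access token; scope it minimally.
claude mcp add github \
  -e GITHUB_PERSONAL_ACCESS_TOKEN=<your-token> \
  -- npx -y @modelcontextprotocol/server-github

# Confirm Claude Code can see it:
claude mcp list
```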
Getting the chores done
I've got this CLI tool I created a while ago and have hardly changed in years. Nothing fancy - a todo manager that grew organically as I needed it. It started as a quick bash script; I added a feature here, bolted on another there, and before I knew it I had a codebase that works but makes me wince when I look inside.
A containerised sandbox for Claude Code
I've been using Claude Code for a few weeks now, watching it navigate codebases and execute commands with impressive competence. It's genuinely helpful – the kind of tool that makes you wonder how you managed without it. But there's this nagging voice in the back of every platform engineer's head that asks uncomfortable questions at 2 AM: What if it decides to explore that .ssh directory?
When code speaks the business language
I've been working with AI coding assistants for months now, and there's this recurring pattern that initially frustrated me but eventually became enlightening. You ask Claude (other AI tooling exists) to help with a method called processData(), and it generates comprehensive tests for data validation and transformation. The tests are perfectly written and completely wrong – because the method actually handles user authentication.
Building controlled context with Markdown reader MCP
I was deep in a coding session last week when it hit me: my AI assistant had no clue about the deployment process I'd carefully documented two months ago. Or the coding standards my team had hammered out. Or the architectural decisions that explained why half the codebase looked the way it did.
Running local AI code assist to power your IDEs with Ollama
I'm intrigued by how effective it is to run code assist models locally, and keen to explore the available IDE extensions and AI models. Let's start with VSCode, the Code GPT extension, and models running locally with Ollama.
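To give a flavour of the moving parts (a sketch; the model tag and the extension's endpoint are assumptions, so check what's current):

```bash
# Pull a code-oriented model; codellama is one option among several.
ollama pull codellama

# The IDE extension then talks to Ollama's local API,
# which by default listens on port 11434:
curl http://localhost:11434/api/tags   # lists the models available locally
```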
Locally running GenAI and large language models with Ollama
If you want to explore Generative AI without relying on cloud services, Ollama can run open models entirely locally, giving you a chance to experiment with GenAI APIs and capabilities.
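As a quick illustration of what "entirely locally" looks like (a sketch; the model name is just an example):

```bash
# Run a model interactively from the terminal:
ollama run llama3 "Explain retrieval-augmented generation in one sentence."

# Or call the local REST API directly:
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Explain retrieval-augmented generation in one sentence.",
  "stream": false
}'
```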