
One of my favorite projects has been using my Mac Mini as a home data center. It hosts this blog and all of my projects. After decades of using cloud machines, I find something tactile about maintaining this machine myself. Plus, it gives me agency - it's my computer, and I don't have to pay rent to somebody every month to keep using it.
For the past few months, I had been contemplating running a local LLM in my home data center. AI from OpenAI and Anthropic is sold as a service with usage-based pricing, which didn't quite fit that philosophy. I didn't want to worry about what my OpenAI bill would be at the end of the month for my home projects.
There's something constraining about having to bean-count every AI request. Programmers aren't used to it today - compute is abundant. Having to pay for every single AI request discourages developers from playing with AI. And, play is when good ideas happen.
As powerful and disruptive as AI has been, it's really incredible that some models are open, so you can run them on your own machines. The problem is that my home computer is too small to run most models. I considered getting an external GPU or a second, bigger computer - but that started to seem like work instead of fun.
When OpenAI released their open-weight models last week, inspiration struck. The smaller of their two models could run on my Mac Mini. So, I quickly installed it with Ollama and switched my projects to use the local AI. My Toolbox now has a brain, and it's pretty smart.
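The switch was mostly a one-line change. Here's a minimal sketch in Python, assuming Ollama's default OpenAI-compatible endpoint on port 11434 and the gpt-oss:20b model tag (the prompt is just an illustration) - existing OpenAI client code only needs a new base URL:

```python
# One-time setup on the Mac Mini: `ollama pull gpt-oss:20b`
from openai import OpenAI

# Ollama serves an OpenAI-compatible API on localhost:11434,
# so the usual client works with a different base_url.
client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="ollama",  # the client requires a key; Ollama ignores it
)

response = client.chat.completions.create(
    model="gpt-oss:20b",
    messages=[{"role": "user", "content": "Say hello to my home data center."}],
)
print(response.choices[0].message.content)
```

Swapping back to a hosted model is just a matter of changing the base URL and the model name.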
I'm now entering a phase of experimenting with AI. I can put it everywhere, and not worry about a surprise bill. I can experiment with giving it tools - access to my Postcard database, or the ability to send an email. I can do dumb and irresponsible things - like have it go through every email I've sent over the last 20 years to apply labels or analyze questions.
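As a taste of what that looks like: tool calling rides on the same interface. Here's a rough sketch with a hypothetical send_email helper - the tool schema is the standard chat-completions format, which Ollama passes through to models that support tool calling:

```python
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

# Hypothetical helper - a real version would talk to an SMTP server.
def send_email(to: str, subject: str, body: str) -> str:
    print(f"(pretend) emailing {to}: {subject}")
    return "sent"

tools = [{
    "type": "function",
    "function": {
        "name": "send_email",
        "description": "Send an email from my home server.",
        "parameters": {
            "type": "object",
            "properties": {
                "to": {"type": "string"},
                "subject": {"type": "string"},
                "body": {"type": "string"},
            },
            "required": ["to", "subject", "body"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-oss:20b",
    messages=[{"role": "user", "content": "Email me a reminder to water the plants."}],
    tools=tools,
)

# If the model chose to call the tool, run it locally.
for call in response.choices[0].message.tool_calls or []:
    if call.function.name == "send_email":
        print(send_email(**json.loads(call.function.arguments)))
```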
Having a local LLM has freed me to experiment. And, I'm looking forward to sharing some of my projects and learnings along the way.