TL;DR: I wasted a day trying out local LLMs as reflection assistants for my journal, with underwhelming results.

As the year turned, I’m still reflecting on 2024 and contemplating how to approach 2025. I’ve kept somewhat regular journals during the year, and reading through the weekly reviews has been very helpful in processing the last 12 months. Nevertheless, there is a lot of information, and I didn’t go through each and every daily journal. In addition, I’ve logged highlights from other sources such as blog posts, YouTube videos, etc.

Since I was inspired to review my setup anyway (I use Obsidian on mobile and desktop), I decided to explore local LLMs as potential reflection assistants. Privacy is important to me, hence the emphasis on locality. Having already reviewed my journal manually, I knew what a high-quality bar of analysis would look like.

Unfortunately, I ended up sinking a good chunk of my day off (Ascension is a public holiday in Finland) into this for not much benefit. At least I did learn something about local LLMs and a few apps. In fact, one of them, LM Studio, is user-friendly enough that I’ll keep it on my machine for more tinkering.

One of the first hurdles is the ability to ingest a number of files, i.e. my Obsidian vault. The first option I tried was NVIDIA’s Chat with RTX. In principle, it supports exactly that, except that it has no official support for .md files. Luckily, there was supposed to be a simple tweak to cheat it into ingesting them. However, it seemed to ingest just 3 files, and I couldn’t figure out why. As a side note, I had lost my patience debugging the failing installation via the NVIDIA app, when the installer from the website worked out of the box.
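If the config-side tweak doesn’t pan out, another way to cheat is to simply copy the notes as .txt files into whatever folder the tool indexes. A minimal sketch of that idea, with hypothetical paths for the vault and the staging folder:

```python
from pathlib import Path
import shutil

# Hypothetical paths: adjust to your vault and to the staging folder
# that Chat with RTX is configured to index.
vault = Path.home() / "Obsidian" / "Journal"
staging = Path.home() / "chat-with-rtx-data"
staging.mkdir(exist_ok=True)

for note in vault.rglob("*.md"):
    # Flatten nested folders into unique names and swap the extension,
    # so the ingester's extension filter accepts the notes.
    flat_name = "_".join(note.relative_to(vault).with_suffix(".txt").parts)
    shutil.copyfile(note, staging / flat_name)
```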

Next I moved on to the already mentioned LM Studio. It was really straightforward to set up and get going. It supports uploading up to 5 files, with some size limitations. So I had Copilot write a script that merges my Obsidian vault’s files into one .txt file (a sketch of it is below), which went swimmingly. I used the default Llama 3.2 1B model. With some prodding I did get some meaningful answers, but it was clear as day that I couldn’t trust them. Still, I’m up for trying out other models, as it’s so easy to obtain them.
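The merge script was along these lines; this is a sketch rather than the exact Copilot output, and the paths are hypothetical:

```python
from pathlib import Path

# Hypothetical paths: the Obsidian vault and the merged output file.
vault = Path.home() / "Obsidian" / "Journal"
output = Path.home() / "journal-merged.txt"

with output.open("w", encoding="utf-8") as merged:
    # Sort for a stable, roughly chronological order of daily notes.
    for note in sorted(vault.rglob("*.md")):
        # A header per note so the model can tell entries apart.
        merged.write(f"\n\n# {note.relative_to(vault)}\n\n")
        merged.write(note.read_text(encoding="utf-8"))
```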

The final attempt of the day was with PrivateGPT.dev. It’s also supposed to be able to ingest a number of files, but as it turns out, you can’t specify a parent directory. This one required more setup, and I was trying to be careful with the Python environments on my machine. After some fiddling and downloading dependencies left and right, the web UI ultimately ran… and failed miserably at importing my files. I threw in the towel.
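For the environment hygiene, the standard library’s venv module is enough to keep a tool’s pile of dependencies away from the system Python. A minimal sketch, with a hypothetical location for the environment:

```python
import venv
from pathlib import Path

# Hypothetical location for the isolated environment, so the tool's
# dependencies stay out of the system Python.
env_dir = Path.home() / "envs" / "privategpt"
venv.create(env_dir, with_pip=True)

# On Windows the activate script lives under Scripts\ instead of bin/.
print(f"Activate it with: source {env_dir}/bin/activate")
```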

In retrospect, I’m disappointed with this experience, but there is still some curiosity left in me to fiddle with this more. In principle, analyzing my writing for insights should be an excellent application for LLMs. Alas, my first take was a resounding failure. At least I did learn a few things and got to do something technical on my day off. Now… let’s try passing this post through LM Studio for proofreading.

P.S.

Assistant llama-3.2-1b-instruct

Spelling and Grammar Mistakes:

  • TLDR" should be spelled as “TODAY’S SUMMARY”

I can’t.