Still fighting an earache, and now both ears feel stuffed like I’m driving through mountains! 😔

Woke up with an earache in my left ear. I wouldn’t be surprised if I contributed to its arrival by watching my grandkids play baseball in a cold drizzle yesterday! 😩

Looking forward to dinner and the Bonnie Raitt concert tonight! I’ve probably seen her in concert over 30 times since the 70s! 😀

Client loved the progress made using a local LLM for critical document identification. My two favorite tools are Ollama and Simon Willison’s LLM. I love working in the terminal! 🖥️

Primary day here in Pennsylvania but I’m registered as an Independent so I don’t get to vote! Stupid! 😠

Last night I finally realized that the embedding model you choose has a big impact on your RAG workflow and the quality of the answers returned by the LLM. A lot of testing today and tomorrow.
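A toy sketch of why the embedding model matters for retrieval: the same query against the same documents can surface a different top match depending on which model produced the vectors. The two "models" and all vectors below are made-up stand-ins, not output from any real embedding model.

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

docs = ["contract terms", "meeting notes", "pricing sheet"]

# Hypothetical embeddings for the same three documents under two different models.
model_a = {"contract terms": [0.9, 0.1], "meeting notes": [0.2, 0.8], "pricing sheet": [0.7, 0.3]}
model_b = {"contract terms": [0.3, 0.7], "meeting notes": [0.2, 0.8], "pricing sheet": [0.9, 0.2]}

query = [1.0, 0.0]  # the same query, embedded (toy vector)

def top_doc(query_vec, embeddings):
    # Rank documents by similarity to the query and return the best match.
    return max(docs, key=lambda d: cosine(query_vec, embeddings[d]))

print(top_doc(query, model_a))  # "contract terms"
print(top_doc(query, model_b))  # "pricing sheet"
```

Same question, same folder of documents, different retrieval result — which is exactly why the answers coming back from the LLM change when you swap embedding models.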

With the release of Llama 3 from Meta yesterday afternoon, local LLM capabilities have gotten even more powerful!

I’m going to look at Simon Willison’s llm command-line tool again as part of my toolchain for the project I’m working on. Sometimes the shiny objects (PrivateGPT) are more of a distraction than a solution.

Had a somewhat frustrating day working with PrivateGPT. I don’t think it’s going to be the answer I’m looking for.

Design issues in FeedLand blogrolls. I would really love to have the same blogroll on my Micro.blog site.

Plan to continue working with Ollama and PrivateGPT today. I want to figure out the best way to fine-tune which documents my LLMs return based on the content of a control document. Basically, there are 15 questions that have answers in a folder of 126 source PDF documents, but so far the LLM is not being consistent.
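One way to debug the inconsistency is to separate retrieval from generation: for each control-document question, rank the source documents and record which files it maps to, before any model answers anything. This is a minimal stdlib sketch of that idea — the `embed()` here is a toy word-count stand-in for a real embedding model, and the file names and text are hypothetical.

```python
from collections import Counter
from math import sqrt

def embed(text):
    # Toy deterministic "embedding": a bag-of-words count vector.
    # In the real pipeline this would be a call to an embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k(question, corpus, k=3):
    # Rank every source document against the question; keep the k best.
    q = embed(question)
    ranked = sorted(corpus, key=lambda name: cosine(q, corpus[name]), reverse=True)
    return ranked[:k]

# Stand-ins for the folder of source PDFs (real pipeline: extracted text per file).
corpus = {
    "report_001.pdf": embed("quarterly revenue and pricing summary"),
    "report_002.pdf": embed("employee onboarding checklist"),
    "report_003.pdf": embed("pricing changes approved by the board"),
}

# Stand-in for the control document's questions.
questions = ["What pricing changes were approved"]

for q in questions:
    print(q, "->", top_k(q, corpus, k=2))
```

Because the toy embedding is deterministic, the question-to-document mapping is the same on every run — which makes it easier to tell whether the inconsistency is coming from retrieval or from the model’s generation step.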