While learning the ropes with Home Assistant, I set up a dashboard that gives me access to all my smart devices and other information in a single view. Among the default cards on the Home Assistant dashboard, the weather forecast felt overwhelming, a wall of icons and numbers. Drawing on my XDA colleagues’ experience with running local LLMs, I wanted to explore whether a generative AI model could turn that forecast into a readable weather report with clothing suggestions.
I considered cloud-based providers, such as Google’s Gemini or OpenAI’s ChatGPT, to generate responses from the weather conditions. But being privacy-conscious, I wanted to run the LLM locally and get text summaries without sending any details off my network. I teamed Home Assistant with Ollama to get a perfect weather report on my dashboard, and with some tinkering, you can too. The only downside is that you’ll need reasonably capable hardware and some patience to manage the resources effectively. Even so, I integrated Ollama with Home Assistant on a Raspberry Pi 4 B without much trouble.
Laying the stepping stones to make Ollama talk to Home Assistant
Making them handle the weather information
I often struggle to make sense of a weather forecast presented as numbers and icons. So, I wanted to turn that data into a plain-language summary with clothing tips to help me get through the day. For that, I first configured the Pirate Weather integration to expose the forecast attributes for the week ahead. I subscribed to the free plan and generated the API key to use in Pirate Weather’s Home Assistant integration.
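Once the integration was up, I could check the forecast data it exposes. If you want to inspect the raw daily forecast yourself, one way is to call the weather.get_forecasts action from Developer Tools in YAML mode; the entity ID below matches the one Pirate Weather created in my setup, so yours may differ:
# Developer Tools → Actions, YAML mode
action: weather.get_forecasts
target:
  entity_id: weather.pirateweather   # your entity ID may differ
data:
  type: daily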
To use Ollama with Home Assistant, I tried two options: running the LLM on my Pi 4B with Home Assistant and running it on my MacBook while exposing it to my home network. Downloading and setting up Ollama barely took a few minutes. Then, I downloaded generative AI models: Llama3 on my M1 MacBook Air (8GB of RAM) and TinyLlama on my Raspberry Pi 4 (4GB of RAM). To test the Ollama server on both machines, I copied and pasted the weather forecast details from the weather card on the Home Assistant dashboard.
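If you want to follow along, getting Ollama and the models in place only takes a handful of terminal commands. This is roughly what I ran; the model names match the ones I used, but anything from Ollama’s library works:
# Install Ollama on Linux / Raspberry Pi OS (macOS and Windows use the installer from ollama.com)
curl -fsSL https://ollama.com/install.sh | sh

# Pull the models used in this project
ollama pull tinyllama   # small enough for a Pi 4 with 4GB of RAM
ollama pull llama3      # better suited to the 8GB M1 MacBook Air

# Quick sanity check from the terminal
ollama run tinyllama "It is 14°C and cloudy with a 60% chance of rain. What should I wear?"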
To make Ollama bind and listen on all network interfaces rather than just localhost, set the OLLAMA_HOST environment variable to 0.0.0.0 before starting the server; the exact way to set it differs between Windows, Linux, and macOS.
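Per Ollama’s documentation at the time of writing, these are the usual ways to set it on each platform (restart the Ollama service or app afterwards):
# Linux (systemd): add an override to the service
sudo systemctl edit ollama.service
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl daemon-reload && sudo systemctl restart ollama

# macOS (Ollama app): set the variable for launchd, then restart the app
launchctl setenv OLLAMA_HOST "0.0.0.0"

# Windows (PowerShell): set a user environment variable, then restart Ollama
setx OLLAMA_HOST "0.0.0.0"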
On Pi 4, Ollama gave me restructured details of those numbers, which were proof enough that the generative AI was functional. I reworked the prompt text multiple times until I got an acceptable response from the TinyLlama model. Meanwhile, the Llama3 model worked fine from the get-go.
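If you want to confirm the server is reachable from another device on the network before involving Home Assistant, a quick call to Ollama’s REST API does the job; swap the placeholder IP for your machine’s address:
curl http://192.168.1.50:11434/api/generate -d '{
  "model": "tinyllama",
  "prompt": "It is 14C and cloudy with a 60% chance of rain. What should I wear?",
  "stream": false
}'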

Ollama is a platform to download and run various open-source large language models (LLMs) on your local computer.

Raspberry Pi 4: Arm Cortex-A72 CPU (quad-core, 1.8GHz), microSD card slot for storage
Choosing a Home Assistant automation prompt for the local LLM
Difficulty will directly depend on your prompt
Setting up an automation involving a local LLM requires a little extra work. The automation retrieves the weather forecast data, sends it to the local LLM, saves the response, and displays it on the Home Assistant dashboard. I replicated the assist pipeline that my colleague, Adam Conway, created to get dynamic notifications, albeit with a few modifications to suit my usage. The common thread is piping the prompt to the LLM and its response back to Home Assistant.
After installing the official Ollama integration, I configured it to use TinyLlama on my Pi and Llama3 on my MacBook. The next step involved selecting Ollama from the Conversation Agent dropdown in the Voice Assistant section of Home Assistant. To test the responses, I set up two automations: one with TinyLlama to receive a shorter response as a phone notification and the other as Markdown text for a dashboard card in Home Assistant. Based on the goal and the AI model’s capabilities, I had to create two prompts. Here’s my prompt for automation that works with the Llama3 model:
{% set today_fc = wx['weather.pirateweather']['forecast'][0] if wx and wx.get('weather.pirateweather') else {} %}
You are a weather expert. Based on the following weather data:
{{ today_fc | to_json }} Generate a short and friendly notification for the user. Include some clothing recommendations (e.g., if a coat, umbrella, or light clothing is needed) and the day's high and low temperatures. Keep it concise to show it as a phone notification.
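For context, here’s a trimmed-down sketch of how that prompt slots into the automation. The weather entity matches the one in the prompt; the agent ID and notify service are placeholders you’d swap for your own:
alias: Morning weather summary
trigger:
  - platform: time
    at: "07:00:00"
action:
  # Fetch the daily forecast into the `wx` variable the prompt expects
  - action: weather.get_forecasts
    target:
      entity_id: weather.pirateweather
    data:
      type: daily
    response_variable: wx
  # Pipe the rendered prompt to the Ollama conversation agent
  - action: conversation.process
    data:
      agent_id: conversation.llama3   # placeholder: pick your Ollama agent
      text: >-
        {% set today_fc = wx['weather.pirateweather']['forecast'][0] if wx and wx.get('weather.pirateweather') else {} %}
        You are a weather expert. Based on the following weather data:
        {{ today_fc | to_json }} ...rest of the prompt above...
    response_variable: result
  # Send the reply to my phone; swap this step for input_text.set_value to feed a dashboard card
  - action: notify.mobile_app_my_phone   # placeholder notify service
    data:
      message: "{{ result.response.speech.plain.speech }}"
The reply from conversation.process lands in result.response.speech.plain.speech, which is what gets pushed to the phone or stored for the dashboard.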
For the TinyLlama-centric automation, I used a short, crisp prompt:
{% set fc = wx['weather.pirateweather']['forecast'][0] if wx and wx.get('weather.pirateweather') else {} %}{{ fc.temperature }}°C, {{ fc.condition }}, {{ fc.precipitation_probability }}% rain. What should I wear?
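To surface the saved response on the dashboard, a plain Markdown card is enough. This sketch assumes the automation writes the model’s reply into a text helper; input_text.weather_summary is just a name I made up, and input_text values are capped at 255 characters, which suits a two-line summary:
type: markdown
title: Today's weather
content: >-
  {{ states('input_text.weather_summary') }}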
I used Home Assistant’s built-in automation debugging features to tackle some errors and relied on the Developer Tools section to test entity states and actions. The most challenging part was tweaking the prompt text to get an acceptable response from TinyLlama. Meanwhile, Llama3 worked flawlessly from the start.
Weather summary from a local LLM has been a game-changer for me
My family and I make the most of the local LLM’s clothing suggestions and weather report on the Home Assistant dashboard, and I can dress for the day without decoding a row of icons. In this project, I learned that the TinyLlama model is sufficient to generate a two-line summary on a low-powered SBC like the Raspberry Pi 4. I achieved better results using the Llama3 model with Ollama running on my Mac, with 8GB of RAM and a more powerful CPU. That gives me enough reason to self-host local LLMs and Home Assistant on a mini PC for improved responsiveness and better results as my smart home automation needs evolve.
