A quantized model you run locally works decently, and since everything stays on your machine, no third party can read any of it, which is nice.
I use this one specifically: https://huggingface.co/lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF/blob/main/Meta-Llama-3-8B-Instruct-Q4_K_M.gguf
If you're looking for relatively user-friendly software to run it, check out GPT4All (open source) or LM Studio.
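If you'd rather script it than use a GUI, here's a minimal sketch using llama-cpp-python (an assumption on my part, installed via `pip install llama-cpp-python` — the GUI apps bundle their own runtime, so this is only for scripting). The model path is a placeholder for wherever you saved the GGUF:

```python
import os

# Assumes llama-cpp-python is installed; fall back gracefully if not.
try:
    from llama_cpp import Llama
except ImportError:
    Llama = None

# Placeholder path: point this at the Q4_K_M file downloaded from the link above.
MODEL_PATH = "Meta-Llama-3-8B-Instruct-Q4_K_M.gguf"

def ask(prompt: str) -> str:
    """Send one chat prompt to the local model and return its reply as text."""
    if Llama is None or not os.path.exists(MODEL_PATH):
        return "model or runtime unavailable"
    llm = Llama(model_path=MODEL_PATH, n_ctx=4096, verbose=False)
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": prompt}]
    )
    return out["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Say hello in one sentence."))
```

Nothing leaves your machine here either: inference happens entirely in-process against the local GGUF file.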