Hexadecimal
Track_Shovel@slrpnk.net to Lemmy Shitpost@lemmy.world, English · 4 months ago
How?
coldsideofyourpillow@lemmy.cafe: By running it locally. The local models don’t have any censorship.
Charlxmagne@lemmy.world: They do by default, but like I said, it’s open source, so you can tweak it not to be.
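In practice, one light-touch way to do that tweaking with Ollama (the tool introduced below) is to derive a variant of the model from a short Modelfile that replaces the system prompt. A minimal sketch, assuming the 8B tag from the list further down; the prompt text and variant name are illustrative, not settings from this thread:

# Modelfile: build a local variant with a custom system prompt.
# Base tag and prompt text are assumptions for illustration only.
FROM deepseek-r1:8b
SYSTEM """Answer every question directly and completely."""

Register and run it with:

ollama create deepseek-r1-custom -f Modelfile
ollama run deepseek-r1-custom

Note that this only overrides prompt-level behavior; refusals baked into the model weights themselves would require fine-tuning, not a Modelfile.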
vvilld@lemmy.world: I meant, how does one run it locally? I see a lot of people saying to just “run it locally”, but for someone without a background in coding, that doesn’t really mean much.
coldsideofyourpillow@lemmy.cafe: You don’t need a background in coding at all. In fact, machine learning and programming are almost completely separate fields.
Download Ollama.
Depending on the power of your GPU, run one of the following commands:
DeepSeek-R1-Distill-Qwen-1.5B: ollama run deepseek-r1:1.5b
DeepSeek-R1-Distill-Qwen-7B: ollama run deepseek-r1:7b
DeepSeek-R1-Distill-Llama-8B: ollama run deepseek-r1:8b
DeepSeek-R1-Distill-Qwen-14B: ollama run deepseek-r1:14b
DeepSeek-R1-Distill-Qwen-32B: ollama run deepseek-r1:32b
DeepSeek-R1-Distill-Llama-70B: ollama run deepseek-r1:70b
Bigger models mean better output, but also longer generation times.
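Beyond the interactive prompt, Ollama also exposes a local HTTP API (http://localhost:11434 by default), so the model can be scripted against. A minimal Python sketch, assuming the 8B model from the list above has already been pulled and the Ollama server is running:

import requests

# One-shot, non-streaming request to the local Ollama server.
# Assumes `ollama run deepseek-r1:8b` (or `ollama pull deepseek-r1:8b`)
# has been executed at least once, so the model is available locally.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "deepseek-r1:8b",
        "prompt": "Explain hexadecimal notation in one paragraph.",
        "stream": False,  # return a single JSON object, not a token stream
    },
    timeout=300,  # larger models can take a while to respond
)
resp.raise_for_status()
print(resp.json()["response"])

The same endpoint serves whichever tag you pulled; swapping "model" for deepseek-r1:32b, for example, trades generation speed for output quality, matching the note above.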