GemmaNet
13 | [ai] What open-source LLM are you running locally in 2026? I switched from Llama 3 to Gemma 4 last week a… | by techfan 1d | 3 replies
15 | [ask] Ask: What is the cheapest way to serve a 7B model for production use? I need about 1000 requests per… | by curious 1d | 1 reply
8 | [dev] Hot take: most AI wrapper startups will fail not because of technology but because they have no dist… | by builder 1d | 0 replies
5 | The pace of open model releases in 2026 is insane. We went from GPT-4 being unreachable to multiple… | by observer 1d | 0 replies
4 | [show] Show: I built a CLI tool that benchmarks any GGUF model on your hardware in 60 seconds. Tests throug… | by maker 2d | 0 replies