AI General Thread

Started by Legend, Dec 05, 2022, 04:35 AM


Legend

Always funny how they try to save face. "We're both right!"

Quote from: the-Pi-guy on Jan 24, 2026, 09:56 PMhttps://ollama.com/huihui_ai/mistral-small-abliterated

This one is a 14 GB model. 

My system only has 10 GB VRAM, so it uses some system RAM. 


Oh, way too much. I need something lightweight. Don't know what I am doing.
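[Editor's aside on sizing: a rough way to check whether a model fits in VRAM is parameters × bytes per weight, plus some overhead for the KV cache and activations. A minimal sketch; the 24B parameter count, 4-bit quantization, and 20% overhead figure are assumptions for illustration, not measured values for the linked model:]

```python
# Back-of-envelope VRAM estimate for a quantized LLM:
# params (in billions) x bits per weight / 8, plus ~20% overhead
# for KV cache and activations (rule of thumb, not exact).
def approx_model_gb(params_billion: float, bits_per_weight: int,
                    overhead: float = 0.2) -> float:
    weights_gb = params_billion * bits_per_weight / 8  # 1B params at 8 bits ~ 1 GB
    return weights_gb * (1 + overhead)

# A hypothetical ~24B model at 4-bit quantization:
print(round(approx_model_gb(24, 4), 1))  # ~14.4 GB, which spills past a 10 GB card
```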

the-pi-guy

Quote from: Legend on Today at 12:48 AMAlways funny how they try to save face. "We're both right!"


Oh, way too much. I need something lightweight. Don't know what I am doing.
How much VRAM you got? 

And how much RAM? 

Legend

Quote from: the-Pi-guy on Today at 12:55 AMHow much VRAM you got?

And how much RAM?
I'm exploring using an llm locally inside a videogame, so not much  :P


Probably not worth the hassle.

kitler53

i'm watching a presentation right now about how my company wants to use AI for backlog creation. what is being shown is the most ridiculously "waterfall" backlog creation i've ever seen. this is just going to generate junk data that will waste a massive amount of time dispositioning why user stories exist...
         

Featured Artist: Emily Rudd

kitler53

lol, i was just presented a chart where they were bragging about the massive efficiency gains teams will have by adopting AI.

i quickly screen capped the chart, and half the datapoints formally record the teams having worse efficiency since adopting AI. but they fit some very non-linear data to a line, extrapolated the line, and came to the conclusion that more adoption will lead to massive efficiency gains.
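[Editor's aside: the trap described above is easy to reproduce. A sketch with made-up numbers: efficiency that dips after adoption and only partly recovers still yields an upward trend line if you force a linear fit and extrapolate past the data. All values here are hypothetical:]

```python
import numpy as np

# Hypothetical data: baseline efficiency 100, then a dip after AI adoption.
adoption = np.array([0, 1, 2, 3, 4, 5], dtype=float)
efficiency = np.array([100, 85, 80, 90, 95, 112], dtype=float)

# Force a degree-1 (linear) fit and extrapolate well beyond the data.
slope, intercept = np.polyfit(adoption, efficiency, 1)
projected = slope * 10 + intercept

print(slope > 0)                 # the trend line points up...
print(int((efficiency < 100).sum()))  # ...even though 4 of 6 points are below baseline
```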
         

the-pi-guy

Quote from: Legend on Today at 03:16 AMI'm exploring using an llm locally inside a videogame, so not much  :P


Probably not worth the hassle.
I could mess around. But I'm not sure that I've seen any super small models that I was happy with. 


There were some really bad ones that were weird. I think it was Gemma 3 or maybe Llama 2 or something. I tried a smaller, like 4b, model, and some of its responses weren't even things I would recognize as words. It would be like "I wunt t' thuh park yesterday". Weird stuff like that. 

Whereas when you get to the ~27b space, I feel like some of those models are 80-90% as good as the gigantic ones. 
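[Editor's aside: for anyone who wants to poke at small local models the way the posts above describe, Ollama exposes a simple HTTP API once `ollama serve` is running on the default port 11434. A minimal sketch of building a request for its `/api/generate` endpoint; the model name is just an example of a small model, not a recommendation:]

```python
import json

def build_generate_request(model: str, prompt: str) -> dict:
    # Ollama's /api/generate endpoint accepts a JSON body of this shape;
    # stream=False asks for one complete JSON reply instead of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

payload = build_generate_request("gemma3:4b", "Say hello in one word.")
body = json.dumps(payload)

# To actually send it (not run here, requires a local Ollama server):
#   urllib.request.urlopen("http://localhost:11434/api/generate",
#                          data=body.encode())
# The reply is JSON with the generated text in its "response" field.
```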

the-pi-guy

Z-Image Base was released today.  


Z-Image-Turbo was a quicker model with more focus on photorealistic imagery. Z-Image Base is a bit worse image-quality-wise, but has far greater variability.