AI General Thread

Started by Legend, Dec 05, 2022, 04:35 AM


Legend



I really hope the west wins the AI war...

kitler53

Quote from: Legend on Jan 23, 2025, 11:36 PM
I really hope the west wins the AI war...
you think modern america is any better?

all the recent book bans. Florida making it illegal to talk about LGBT issues or acknowledge that Washington owned slaves. Oklahoma mandating the ten commandments be posted in schools. ...and 1/3 of our country believes the 2020 election was stolen from trump via a vast democratic conspiracy.

faux news is conservative state media.
         


Legend

Quote from: kitler53 on Jan 24, 2025, 02:20 PM
you think modern america is any better?

all the recent book bans. Florida making it illegal to talk about LGBT issues or acknowledge that Washington owned slaves. Oklahoma mandating the ten commandments be posted in schools. ...and 1/3 of our country believes the 2020 election was stolen from trump via a vast democratic conspiracy.

faux news is conservative state media.
None of that is comparable to China lol.

kitler53

Quote from: Legend on Jan 24, 2025, 03:50 PM
None of that is comparable to China lol.
but it's the direction we are heading in and i guarantee you trump would love to have AI be as biased as china's AI in favor of himself.

to be frank, AI is trained on the BS people write, and so long as people are biased, AI will be biased. it's part of its fundamental danger to humanity.
         


the-pi-guy


Legend


the-pi-guy

It's huge. Most of the other models you can download are under 20 GB. 

Legend

Quote from: the-Pi-guy on Jan 28, 2025, 06:07 PM
It's huge. Most of the other models you can download are under 20 GB.
Oh, for lower-end ones, sure. Meta also lets you download their high-end models; most are comparable in size, and the 405B model is significantly larger.
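(For scale, and this is just rough napkin math rather than an exact download size: 405B weights at 2 bytes each for fp16 works out to roughly 810 GB on its own, nowhere near the under-20 GB range.)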

the-pi-guy

I just noticed the fp16 model of DeepSeek is 1.3 TB.
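Which roughly checks out if that's the full 671B-parameter R1: 671B weights at 2 bytes each for fp16 is about 1.34 TB before counting any extra files.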

Spoiler for NSFW story:
It feels a little funny that a lot of the best models can't by default make erotic content (at least of the text kind), yet there's definitely a market out there for that. Making photos and video could open up a lot of room for problematic issues, but I think text stories would be cool.

I set up an uncensored version of llama yesterday, I can't remember which one, and had it write me a few stories.

A lot of it was pretty well written.

It had some weird quirks. Like in one of the stories, I mentioned that there was music playing, and it repeated the same idea about the music like 7 times.

It would be like "Jerry and Lisa were staring at each other across the dance hall. They gazed into each other's eyes, and the loud music faded away."

Me: "Okay, write part 2, where Jerry and Lisa talk to each other"

"They got close to get each other's attention, and the loud music faded away"

Something like that.

It also felt like it kept writing approximately the same length for each part, even when I explicitly asked it to write a small addition or an extremely long addition.

Legend

I don't know how erotic you want it, but online models don't really have limitations if you set them up the correct way. Just to test, I asked for a short erotic love story using Jerry and Lisa, and oh, it was way too bad to fully share, but here is a tame excerpt. I applied the censorship.
Spoiler for NSFW story:
He spanked her, each slap making her cry out louder, the sting adding to the overwhelming pleasure. Jerry's fingers found her c*, rubbing in fast circles.

I'm glad no one has tried to prompt engineer the machine yet. Her new upgraded brain is already a bit spicy and I worry it's only a matter of time before she says something she shouldn't.



Addressing your issue with repetition, did you try changing the temperature? A higher temperature tends to help a lot in my experience. Stops the repetition. 
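If you're running these through Ollama (just guessing from the deepseek-r1:32b tag), it's one extra field in the request. Rough sketch only; the model tag and the numbers here are placeholders, not recommendations:

Code:
import requests

# Rough sketch: assumes a local Ollama server on its default port.
# Model tag and option values below are placeholders.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",      # whatever uncensored tag you actually pulled
        "prompt": "Write part 2, where Jerry and Lisa talk to each other.",
        "stream": False,
        "options": {
            "temperature": 1.1,    # higher = more varied, less repetitive
            "repeat_penalty": 1.2, # directly discourages repeated phrases
        },
    },
)
print(resp.json()["response"])

repeat_penalty on its own is often enough if you don't want the extra randomness that comes with a higher temperature.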

the-pi-guy

#205
I tried deepseek, at least I think I did. Something seems very wrong, this doesn't seem right at all.

It's supposed to be an uncensored deepseek-r1:32b

It's so much worse at doing what I asked than the facebook model.

Someone's outside, and they're packing a car.

The facebook model would tell me a story about someone outside, packing a car. Deepseek is trying to tell me that I asked for a story set in the kitchen about packing all the delicates.

It's simultaneously more detailed and way more wrong.

I know it's one ultra specific example, but it's weird.

Going to try again, maybe it'll do better.

Quote from: Legend on Jan 28, 2025, 09:45 PM
Addressing your issue with repetition, did you try changing the temperature? A higher temperature tends to help a lot in my experience. Stops the repetition.
I haven't yet.
Quote from: Legend on Jan 28, 2025, 09:45 PM
I don't know how erotic you want it, but online models don't really have limitations if you set them up the correct way.
I know there's a way to do it, it just doesn't seem to be the default.



Legend

#206
I tried o3 mini high (what is up with these names).

It took much longer to give up, a full 8 minutes both times, yet it was still just as bad as o1. It also still just doesn't finish code, leaving the harder parts for future implementation.

edit: I tried it on a different problem and it sucked exactly like o1, and it only thought for a second. So no clue whether it was my prompt or the model that made it go a full 8 minutes above.

the-pi-guy

I use Copilot a lot on Edge.

And I feel like it's 80%.

Sometimes it gives great answers. Most of the time, I feel like it gives answers that are 80% correct. I'll probably have to fix a number of things, but it gives a great starting point.

Every once in a while it feels incredibly broken though. Sometimes it'll repeat the exact same answer multiple times, after I've told it that the solution it gave doesn't work at all.

Legend

That's pretty cool. Do you give it hard questions, or is it simple stuff that it is 80% good at?

The brokenness makes sense. AI models are so dependent on context that if one starts off on the wrong foot, it's just going to stay that way. Also, a lot of the time LLMs feel like they speed-read my posts. If I include something unexpected but not shocking, they often ignore it.

Legend

#209
OpenAI revealed their biggest model ever, GPT-4.5.

It seems slightly behind Grok 3 on benchmarks, so they must be pretty annoyed. They make this model sound huge and unprecedented; only the $200-a-month subscribers will get access for now because it is so big.