Quote from: Legend on Mar 14, 2026, 09:16 PM
Is there a way to run two models in parallel so that the next token can be sampled using both their outputs? Like averaged or multiplied or whatever?

Not that I know of.
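Not aware of an app that does it out of the box, but the mechanic itself is simple if you can get per-token probabilities out of both models. A minimal sketch with toy distributions, assuming both models share the same tokenizer/vocabulary (a hard requirement for this to make sense at all):

```python
import numpy as np

def ensemble_next_token(probs_a, probs_b, mode="average", seed=0):
    """Pick the next token from two models' probability distributions.

    probs_a / probs_b: per-token probabilities over the SAME vocabulary
    (both models would need a shared tokenizer for this to work).
    """
    if mode == "average":
        combined = (probs_a + probs_b) / 2.0
    elif mode == "multiply":
        combined = probs_a * probs_b            # product-of-experts style pooling
        combined = combined / combined.sum()    # renormalize to a distribution
    else:
        raise ValueError(f"unknown mode: {mode}")
    rng = np.random.default_rng(seed)
    return int(rng.choice(len(combined), p=combined))

# Toy 4-token vocabulary: model A favors token 0, model B favors token 1.
a = np.array([0.7, 0.1, 0.1, 0.1])
b = np.array([0.2, 0.6, 0.1, 0.1])
print(ensemble_next_token(a, b, "average"))
print(ensemble_next_token(a, b, "multiply"))
```

Averaging is forgiving (either model can keep a token alive), while multiplying only keeps tokens both models agree on. The hard part in practice isn't the math, it's running two models and lining up their vocabularies.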
Quote from: Legend on Mar 14, 2026, 10:22 PM
Hahaha I assume you watched it as an adult.

I just watched it yesterday with the kids.
I love Incredibles 1, but I last watched it when I was young, so I mostly just think "wow, those sphere robots are cool! And the lava hideout was cool! And the kids were cool!"
Quote from: the-Pi-guy on Mar 14, 2026, 01:30 AM
Incredibles 1 is such a great movie.

Hahaha I assume you watched it as an adult.

Spoiler:
The whole sequence with the computer showing all the dead supers, while Helen decides if she should call him, and the sequence afterwards with the plane being shot down, are so great.
The line where Helen says "there are children aboard this plane" practically makes me tear up on the spot.
I don't think anything in the second movie comes close.


Quote from: the-Pi-guy on Mar 14, 2026, 08:45 PM
LM Studio has so many options you can mess with.

Is there a way to run two models in parallel so that the next token can be sampled using both their outputs? Like averaged or multiplied or whatever?
You can even set how the context window is managed (truncate middle, rolling).
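For reference, the "truncate middle" style of context trimming is easy to picture: keep the start of the context (usually the system prompt) and the most recent tokens, and drop what's in between. A rough sketch, not LM Studio's actual implementation; `keep_head` is a made-up knob for how much of the budget goes to the start:

```python
def truncate_middle(tokens, max_len, keep_head=0.25):
    """Trim an over-long context by dropping tokens from the MIDDLE,
    keeping the start (system prompt) and the most recent turns.
    keep_head: fraction of the budget reserved for the start (assumed knob)."""
    if len(tokens) <= max_len:
        return tokens
    head = int(max_len * keep_head)
    tail = max_len - head
    return tokens[:head] + tokens[-tail:]

ctx = list(range(100))           # pretend token ids
trimmed = truncate_middle(ctx, 40)
print(len(trimmed))              # → 40, budget respected
```

A "rolling" window would instead just keep the last `max_len` tokens, which is simpler but forgets the system prompt.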
Quote from: Legend on Mar 14, 2026, 05:52 PM
Different sampling method/temperature?

Most of the apps let you change the temperature, top K, and top P.
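Those three knobs compose in a standard way: temperature rescales the logits, top K keeps only the K most likely tokens, and top P (nucleus sampling) keeps the smallest set of tokens whose probability mass reaches P. A self-contained sketch of that pipeline:

```python
import numpy as np

def sample(logits, temperature=1.0, top_k=0, top_p=1.0, seed=None):
    """Sample one token id. temperature rescales logits; top_k keeps the k
    most likely tokens; top_p keeps the smallest high-probability set whose
    mass reaches p (nucleus sampling)."""
    logits = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    probs = np.exp(logits - logits.max())        # numerically stable softmax
    probs /= probs.sum()
    if top_k > 0:
        cutoff = np.sort(probs)[-top_k]
        probs = np.where(probs >= cutoff, probs, 0.0)
    if top_p < 1.0:
        order = np.argsort(probs)[::-1]                  # high to low
        cum_before = np.cumsum(probs[order]) - probs[order]
        keep = order[cum_before < top_p]                 # until mass reaches p
        mask = np.zeros_like(probs)
        mask[keep] = 1.0
        probs *= mask
    probs /= probs.sum()                                 # renormalize survivors
    rng = np.random.default_rng(seed)
    return int(rng.choice(len(probs), p=probs))

print(sample([5.0, 1.0, 0.0], temperature=0.1, seed=0))  # → 0, low temp ≈ argmax
```

Which is why two apps with different defaults for these can feel like different models.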
Quote from: Legend on Mar 02, 2026, 09:40 PM
Any word on game updates?
Quote from: the-Pi-guy on Mar 14, 2026, 05:35 PM
For some reason, running local models feels goofier than it should be. And I'm not sure why. Like for some reason, prompt adherence is worse when I'm using a different Android app.

Different sampling method/temperature?
Not sure if they're formatting the requests differently, or passing different default values for the model.
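One way to check would be to capture and diff the raw requests each app actually sends; an extra system prompt or pinned sampling defaults alone can shift prompt adherence. A hypothetical comparison (the field names follow the common OpenAI-style chat schema; all values here are invented for illustration):

```python
# Hypothetical payloads two apps might send to an OpenAI-compatible
# /v1/chat/completions endpoint for the same user prompt.
app_a = {
    "model": "local-model",
    "messages": [{"role": "user", "content": "Summarize this article."}],
    # app A sends no sampling fields, so the server's defaults apply
}
app_b = {
    "model": "local-model",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this article."},
    ],
    "temperature": 1.0,       # app B pins its own defaults instead
    "top_p": 0.95,
    "repeat_penalty": 1.1,
}

# Fields one app sets that the other leaves to server defaults:
diff = sorted(set(app_b) - set(app_a))
print(diff)  # → ['repeat_penalty', 'temperature', 'top_p']
```

Proxying the apps through something that logs request bodies would settle which of the two (template wrapping vs. default values) is actually responsible.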