AI General Thread

Started by Legend, Dec 05, 2022, 04:35 AM


Legend

omg,.. i'm probably late to the party but i just saw these:

[embedded videos]

holy crud,.. i know it's early but this looks absolutely terrible. horrendously terrible. you couldn't pay me enough to play this crud.
The first two are the inverse of this:

[embedded video]

ChatGPT is very bland by default, just like text-to-image models. It needs to be told/tricked into doing the cool things, which is why I thought it sucked at first.


the-pi-guy

[embedded video]

Legend

You beat me to it!

It's interesting how a lot of tech people kinda dismiss chatgpt until they spend more time with it. 90% of what it makes just feels like a good understanding of grammar, and then it does something that really wows you.

With both this and other new models, I predict all casual human creativity will be done better by AI. The tech is advancing so fast and just needs a few more years.

I think the curve will stop before full human creativity is solved though. These types of AI are very good at doing anything a human could imagine/preview in their head, but I think they'll never be able to solve more complicated problems that take years to conceptualize. For that I think we'll need new AI breakthroughs.

I guess here are some concrete predictions.

Text Models
Turing Test: it'll be passed within a few years. It's easy if testing only lasts a few hours and can be repeated over and over until the model gets lucky.

Conlang Test: the ability to create a new language with a proper history, and then use that language without slipping up. I don't think this type of AI will ever be able to do this. It can already get pretty dang close, but an incredible number of details need to be worked out before presenting a single word.

Image Models
Newspaper Test: the ability to make a picture of a newspaper with correct writing on it. Easy to solve if a text model is built in, but very hard with a pure image model. I think it'll take some time, but it'll pass eventually.

Jigsaw Test: the ability to render a pile of pieces from a thousand-piece jigsaw puzzle. Every piece needs to be unique and have the correct design for the puzzle to actually work. The image on the puzzle needs to be another pile of jigsaw pieces, also following the same rules. I doubt this test will ever be passed by the current type of AI.

kitler53

i mean,.. my thought is still that this is blatant plagiarism.

okay, the youtube guy is right that a lot of today's art is basically also plagiarism, but humans know the difference between recycling/rephrasing other people's work to fit a purpose and creating new work.

it's like that MS example of planning a 5-day vacation to wherever. the AI didn't figure anything out,.. it found human-made sources and plagiarized them. so hooray!! the AI gave me the exact answer in the exact way i wanted it without needing to go to any of the original sources!! ...except now those original sources can no longer make money, so they no longer get made.

i guess i just don't think this is a rosy situation. all the money goes to the skimmers and not the creators. it ensures the only people still creating content for the AI to plagiarize are those with a motive:
1. as the owner of a thing (like a restaurant), i want to "train" the AI to say this or that about my thing.
2. as a bad actor, i want to spread disinformation.

i don't think the technology is inherently evil or anything. far from it. all tech has good and bad aspects. i do think that, left unregulated, specifically with respect to content creator rights, this will end badly. these types of AI should not be allowed to "train" on content found on the internet without paying those content owners for their work. i think that's what's needed to keep this boat afloat instead of capsizing.



Legend

i mean,.. my thought is still that this is blatant plagiarism.

okay, the youtube guy is right that a lot of today's art is basically also plagiarism, but humans know the difference between recycling/rephrasing other people's work to fit a purpose and creating new work.

it's like that MS example of planning a 5-day vacation to wherever. the AI didn't figure anything out,.. it found human-made sources and plagiarized them. so hooray!! the AI gave me the exact answer in the exact way i wanted it without needing to go to any of the original sources!! ...except now those original sources can no longer make money, so they no longer get made.
It'd be literally impossible for it to exclusively plagiarize like that.

A simple example is math. It's very easy to guarantee that the model never saw these two numbers together anywhere in its training data, yet it can synthesize the correct answer from its understanding of text alone.
"What is 59469464+38274209?"
"97743673"

The travel plans suck, but you can try them with small random cities too. I found an alphabetical list of cities on wikipedia and picked one at random.
"Write me a vacation plan for Starodub Russia."
"Day 1:

Arrival in Starodub and check-in to the hotel
Explore the city and its local markets
Visit the Starodub Museum of Local Lore
Enjoy dinner at a local restaurant
Day 2:

Take a guided tour of the Starodub Fortress, a historic monument
Visit the nearby village of Lopatino, known for its traditional wooden architecture
Explore the Starodub Woods and take a hike to appreciate the local flora and fauna
Have dinner at a local tavern and try the famous Russian cuisine
Day 3:

Visit the St. George Monastery and learn about the history of Orthodox Christianity in the region
Take a relaxing stroll in the Starodub Central Park
Try your hand at traditional Russian crafts at the local crafts market
Enjoy a farewell dinner at a local restaurant
Day 4:

Departure from Starodub
Note: The itinerary can be customized based on your interests and preferences. Make sure to book your activities and accommodations in advance."

The AI made this sucky travel plan all on its own, from the same primary sources most vacation planners use.


i guess i just don't think this is a rosy situation. all the money goes to the skimmers and not the creators. it ensures the only people still creating content for the AI to plagiarize are those with a motive:
1. as the owner of a thing (like a restaurant), i want to "train" the AI to say this or that about my thing.
2. as a bad actor, i want to spread disinformation.

i don't think the technology is inherently evil or anything. far from it. all tech has good and bad aspects. i do think that, left unregulated, specifically with respect to content creator rights, this will end badly. these types of AI should not be allowed to "train" on content found on the internet without paying those content owners for their work. i think that's what's needed to keep this boat afloat instead of capsizing.
Do you consider it different from wikipedia and other websites that rehost information from primary sources?

I think the only legal solution is to make people responsible for what they do with generated work, treating it like every other tool. Currently Google and Bing are legally allowed to read websites and learn about them, yet they'd still get in trouble if they rehosted too much. Why does AI need a different framework?

kitler53

Do you consider it different from wikipedia and other websites that rehost information from primary sources?

I think the only legal solution is to make people responsible for what they do with generated work, treating it like every other tool. Currently Google and Bing are legally allowed to read websites and learn about them, yet they'd still get in trouble if they rehosted too much. Why does AI need a different framework?
i absolutely think it's a problem in news journalism.  

i can't find a source on it right now but... google news used to skim the article and give a 1-sentence summary of the contents. they had to remove the feature because there was measurable evidence that with a 1-sentence summary available, users were like 90% less likely to click through to the actual article, and the ad revenue of those sites massively declined.

sounds a bit like chatGPT? ...sounds exactly like that to me.


chatGPT just obscures the "rehosting" that is going on. it's still taking information others created and profiting off of it at the expense of the original content creators.



Legend

[embedded video]

Early in the video he had a really good description of why chatgpt can be so unimpressive until you spend some time with it.

It's really interesting how prompt engineering has become such a major element of both this and text-to-image. Will these user-end modifiers become more and more important, essentially programming for AIs? I think the short-term future needs to be an all-encompassing single model trained on as much multimodal data as possible. 99.999% of its output can be pure garbage, yet with the right user tweaks it could be incredible.
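
To make that concrete, here's roughly what this kind of "programming" looks like in practice. A minimal sketch against OpenAI's public chat completions endpoint; the model name and the persona text are placeholders I made up, not anything official:

import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"

def ask(messages):
    # send a chat request; assumes OPENAI_API_KEY is set in the environment
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": "gpt-3.5-turbo", "messages": messages},  # model name is an assumption
    )
    return resp.json()["choices"][0]["message"]["content"]

# bland default: no steering at all
plain = ask([{"role": "user", "content": "Describe a castle."}])

# "prompt engineered": same question, with a persona and constraints up front
steered = ask([
    {"role": "system", "content": "You are a gothic horror novelist. Write vivid, concrete prose. Never use cliches."},
    {"role": "user", "content": "Describe a castle."},
])

print(plain)
print(steered)

The second answer is usually dramatically better even though the underlying model is identical; all the work happened in the prompt.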

kitler53

I'm pretty sure I wrote a post somewhere talking about how expensive this tech would be to include in my product. 

a lot of it is how dumb a use case it would be (search), but so much needed to be done in laying out where the data is, what it means, and what we want in response. the amount of supporting data needed to get a halfway decent result was incredible.



Legend

I'm pretty sure I wrote a post somewhere talking about how expensive this tech would be to include in my product.

a lot of it is how dumb a use case it would be (search), but so much needed to be done in laying out where the data is, what it means, and what we want in response. the amount of supporting data needed to get a halfway decent result was incredible.
Yeah, there need to be generic "office assistant" plugins (or whatever) that focus the data and get you 90% of the way there, as in the sketch below.

Once there's a version of chatgpt that can run locally, so much is possible.
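
kitler53's "laying out where data is and what it means" step is basically context curation: you describe the schema and hand-pick the records before the model ever sees the question. A toy Python sketch of the idea; every field name and record below is invented for illustration:

# toy "office assistant" style context injection
# (all data below is made up for illustration)
FIELD_DESCRIPTIONS = {
    "sku": "unique product identifier",
    "qty_on_hand": "units currently in the warehouse",
    "reorder_at": "threshold that triggers a purchase order",
}

RECORDS = [
    {"sku": "A-100", "qty_on_hand": 3, "reorder_at": 10},
    {"sku": "B-200", "qty_on_hand": 55, "reorder_at": 20},
]

def build_prompt(question):
    # the model only "knows" what we lay out for it here
    schema = "\n".join(f"- {k}: {v}" for k, v in FIELD_DESCRIPTIONS.items())
    data = "\n".join(str(r) for r in RECORDS)
    return (
        "You answer questions about our inventory.\n"
        f"Field meanings:\n{schema}\n"
        f"Records:\n{data}\n"
        f"Question: {question}\n"
        "Answer using only the records above."
    )

print(build_prompt("Which products need reordering?"))

Building that supporting layer for a real product is exactly the expensive part being described; it's the 90% a generic plugin would have to package up.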

Legend

GPT-4 is supposedly set to release this week. GPT-3 is 3 years old now and powers ChatGPT so this should be a huge upgrade.

Or it could be a disappointing joke. These companies often misapply AI ethics and sabotage their models. It's painfully clear from ChatGPT and text-to-image that it's wrong to think of AI as a human with a personality/morals. It's just a prediction machine, and an accurate prediction machine sometimes needs to be able to predict a lot of horrible stuff.

For safety/ethics we need to define what types of predictions we're looking for, not teach the model that only these predictions are possible.

Legend

Phones have been using AI post-processing for a while, but it's funny how obvious it is with this Moon example: [Reddit thread]

(someone displays a blurry photo of the Moon on a computer screen, photographs it with the phone, and the phone still renders it as an ideal Moon)

the-pi-guy

Phones have been using AI post-processing for a while, but it's funny how obvious it is with this Moon example: [Reddit thread]

(someone displays a blurry photo of the Moon on a computer screen, photographs it with the phone, and the phone still renders it as an ideal Moon)
For some reason I find this amazing.  

I'm not laughing out loud, but I find that hilarious.

Legend

[embedded link]

Legend

[embedded link]

Legend

Both text and images
