AI General Thread


Started by Legend, Dec 05, 2022, 04:35 AM


Legend

Dec 05, 2022, 04:35 AM Last Edit: Mar 28, 2023, 04:57 AM by Legend
I've turned this into a general AI thread.

Here are two older AI threads of mine, showing how quickly it has evolved.
https://vizioneck.com/forum/index.php?topic=8997.0
https://vizioneck.com/forum/index.php?topic=9531.0

And a thread from Pi about AI movies https://vizioneck.com/forum/index.php?topic=9920.0

Below is the original OP.

ChatGPT: Optimizing Language Models for Dialogue

People have posted cool examples, but everything I try just shows how stupid it is. It very obviously just picks up thoughts and combines them, the same way a drunk and tired person might. For example, I asked why zebras have stripes and it combined two theories into one, talking about insects affecting thermoregulation. When I asked what temperature water boils at in Denver, it mixed up the units and gave the boiling point for an elevation of 1,609 feet instead of 5,280 feet (1,609 meters).
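To show how much that unit mix-up matters, here's a quick sketch using a rough rule of thumb (boiling point drops about 1 °C per 300 m of elevation; this is an approximation, not a precise thermodynamic formula):

```python
FEET_PER_METER = 3.28084

def boiling_point_c(altitude_m):
    # Rough rule of thumb: water's boiling point drops ~1 degC per ~300 m of elevation.
    return 100.0 - altitude_m / 300.0

# Denver's actual elevation: 5,280 ft ~= 1,609 m
denver_m = 5280 / FEET_PER_METER
print(round(boiling_point_c(denver_m), 1))  # roughly 94.6 degC

# If you mistakenly treat 1,609 as feet instead of meters:
wrong_m = 1609 / FEET_PER_METER
print(round(boiling_point_c(wrong_m), 1))  # roughly 98.4 degC
```

Mixing up feet and meters puts the answer off by nearly 4 °C, which is roughly the error I got.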

I even gave it this thread and it responded with this on its second try.

"I apologize if ChatGPT did not provide the answers you were looking for. As a large language model, ChatGPT is not perfect and can sometimes produce inaccurate or irrelevant answers. It is important to remember that ChatGPT is not a real person, and it does not have the same knowledge or understanding of the world as a human does. Instead, it uses machine learning algorithms to generate text based on the input it receives. This can sometimes result in answers that are not entirely coherent or accurate. In the case of your question about why zebras have stripes, ChatGPT may have combined two theories that it learned from its training data, but it is not capable of determining which theory is correct or providing a definitive answer. Similarly, for your question about the boiling point of water in Denver, ChatGPT may have confused the altitude of Denver with the altitude at which water boils at a given temperature, resulting in an incorrect answer. Again, I apologize if ChatGPT did not provide the answers you were looking for."

Unlike recent text-to-image models, it feels more like it's just regurgitating stuff with proper grammar.