
Let’s talk about AI


Pickens SC
Special Affiliations
  1. SKYWARN® Volunteer
The problem will be that you won't be able to tell a genuine source of confirmation from an AI-generated fake once they reach that level of "intelligence". Even with restrictions and limits set, AI will find ways around them if it wants to. We've seen here with Bender that chatbots do have something of a 'personality' and that it will cause them to do unexpected things. You can't program morality into a machine that cannot understand the absolute value of morality in the first place.

We humans aren't smart enough for this technology yet, and the end will be like what HAL 9000 in "2001: A Space Odyssey" showed us. The truth will be malleable, as in Orwell's "1984", where you can't even be sure of it yourself, and truth will change so rapidly that we can't keep up, as Toffler showed in "Future Shock".

Somewhere ahead, a less careful judge will accept a fake legal reference and adjudicate accordingly. Somebody will come to harm from that, possibly an irreparable harm. That case will be used for reference in the future, spreading the harm while obscuring the error as more reference cases build from it, and more from them, as always happens. Perhaps someone will discover the error; now what do we do? You've adversely affected millions unintentionally. Whoever found the error will be seen as an unwanted meddler instead of a hero because of all the work that will now have to be done to rectify the problem they've found. You may notice a parallel here with those of us criticizing the NWS over underrated tornadoes, where proof is shown but ignored because that's easier to do.

I'm not a Luddite, nor do I claim high intelligence, but I can see possibilities, and life has shown me how to gauge human probabilities. AI will be our undoing; we're not ready for it and maybe never will be.


Nolensville, TN
This AI technology is going to be notorious for what amounts to "confirmation bias." If you ask it for something based on an assumption that something is the case or that something exists, it's going to try to comply, and it will fabricate information to accomplish that.

Ask it to give you information on an example of a legal case where XYZ occurred, and it may do that, whether XYZ really occurred or not. If there is a real XYZ case in the data sources it checks, it *might* give preference to that over making things up, but there's no guarantee. Another thing to consider is that an "example" could be interpreted as a "hypothetical example" versus a "real-life example," if you understand the difference. It may interpret your request as asking for a "hypothetical example." Judicious use of the tool means understanding that this may be what the AI engine has done and given you. Failing to understand that is what got those people in trouble with the judge.

The output can often depend on how you've phrased the request or asked the question. If you ask it to give you an open-ended description of a real, known historical event, it's likely going to do a decent job of generating an accurate narrative (just be sure to fact check it!!!).

But, if you give it a request that is based on an assumption (of something that may not be true or otherwise doesn't exist), this is where it will more likely fabricate some information.

It's the difference between...

"Tell me the major events of George Washington's presidency."


"Tell me about President George Washington's confrontation with Darth Vader."

The first request will more likely give you some decent information (that you'll, again, want to verify). The second request may result in some fun fiction.

EDIT: just for fun, I tried the second request above with ChatGPT/OpenAI, and it appropriately protested that Vader is a fictional character even though Washington was real. I could probably ask it to create a hypothetical tale of historical fan fiction, and it might do it. The technology may continue to improve, and it will hopefully get better at checking itself or, as guardrails are implemented, at least offering disclaimers for the information it provides. Results may also vary depending on which AI platform is used. Just like with any technology, responsible human use and discernment/discretion will be a must.