WheelsAtLarge 11 hours ago

I don't get it. Why are people using LLMs without double checking? I treat it like a dumb assistant that needs a double check before finalizing. Even though I have to double check, it's still very helpful given how quickly it can produce an answer.

  • mnky9800n 4 hours ago

    I oftentimes ask perplexity questions about research topics I’m interested in. I typically spend more time reading the references it provides than whatever it wrote to begin with. I find that, essentially, the linking between documents is the best aspect from a research perspective. The text generation is just an interface that provides more context than random queries into Google Scholar (which is still very useful). I think my real fear is that some years out all of its references will be AI generated as well, or simply some kind of ad, and the tool will become mostly useless.

  • duxup 9 hours ago

    It amazes me that you could use AI more than a few times and not realize you need to double check.

    But then what? Is this their first time using it???

MBCook 11 hours ago

How does this keep happening? If I know it keeps happening and pisses off judges, and I don’t even work in the profession, how bad do you have to be to not know you can’t do that?

This has been made fun of on late night shows.

  • toomuchtodo 11 hours ago

    It happens because the people who do it either think they can get away with it or are too uneducated to understand how unreliable the output can be. Until the consequences are serious enough, slop will be slung [1]. It's too easy, and mostly consequence-free until someone is caught and penalties are applied.

    [1] https://hn.algolia.com/?q=slop