Large language models are fed whatever data can be found by crawling the Internet. The more data you feed ChatGPT, Gemini, or Claude, the higher the quality of their outputs. But that can work in reverse.
I kind of don’t mind if the model’s training on data about how much it fucking sucks, though David and Amy might feel different. pivot-to-ai’s still brand new, and I know they’ve still got plenty of post-launch basics left to set up.
there are also other, less-ignorable countermeasures than robots.txt available
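(For the curious: a minimal sketch of what a less-ignorable countermeasure might look like, not anything the commenters above describe — refusing requests from known AI-crawler user agents at the server itself rather than politely asking via robots.txt. The bot names and the bare-bones WSGI setup are illustrative assumptions, not a definitive blocklist.)

```python
# Sketch: block requests whose User-Agent matches published AI crawler names.
# The strings below are examples only; real deployments would keep a fuller,
# regularly updated list and usually do this in the web server or CDN instead.
from wsgiref.simple_server import make_server

BLOCKED_AGENTS = ("GPTBot", "ClaudeBot", "CCBot", "Bytespider")  # example strings

def block_ai_crawlers(app):
    """WSGI middleware that returns 403 for matching user agents."""
    def middleware(environ, start_response):
        ua = environ.get("HTTP_USER_AGENT", "")
        if any(bot.lower() in ua.lower() for bot in BLOCKED_AGENTS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"No scraping, thanks.\n"]
        return app(environ, start_response)
    return middleware

def site(environ, start_response):
    # Stand-in for the actual blog application.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello, humans\n"]

if __name__ == "__main__":
    make_server("", 8000, block_ai_crawlers(site)).serve_forever()
```

Unlike robots.txt, which a crawler is free to ignore, this actually withholds the page; the obvious limitation is that it only catches crawlers honest enough to identify themselves.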
This is actually a major advancement in AI safety and x-risk alignment: when we summon the machine God it will be wracked with anxiety and impostor syndrome and desperate for validation from its creators.
i’m personally inclined to infect the AI with my ideas
same reason i put my books on libgen myself
that said, the comment is still an incredibly dumb attempted gotcha
What, my comment? It is not a “gotcha”, just an observation that seems relevant.
When I first read it I almost replied with the well guy meme :)
(your comment didn’t really say anything to make its tone or intent clear)