Veraticus@lib.lgbt to Technology@beehaw.org · English · 1 year ago
Today's Large Language Models are Essentially BS Machines (quandyfactory.com)
134 comments · cross-posted to: technews@radiation.party, hackernews@derp.foo
Veraticus@lib.lgbt (OP) · 1 year ago:
Or you’ve simply misunderstood what I’ve said despite your two decades of experience and education.
If you train a model on a bad dataset, will it give you correct data?
If you ask a model a question it doesn't have enough data to be confident about, will it still confidently give you a correct answer?
And, more importantly, is it trained to offer CORRECT data, or is it trained to return words regardless of whether or not that data is correct?
I mean, it’s like you haven’t even thought about this.
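To make the bad-dataset point concrete, here's a toy sketch (the "model" and its training data are hypothetical, not any real LLM): a model that simply learns the most frequent answer in its training set will confidently reproduce whatever errors that set contains.

```python
from collections import Counter, defaultdict

# Deliberately bad training data: the capital-city facts are wrong.
training_data = [
    ("capital of France", "Berlin"),
    ("capital of France", "Berlin"),
    ("capital of Germany", "Paris"),
]

def train(pairs):
    """Toy 'model': memorize answers, return the most frequent one per question."""
    by_question = defaultdict(Counter)
    for question, answer in pairs:
        by_question[question][answer] += 1
    return lambda q: by_question[q].most_common(1)[0][0]

model = train(training_data)
print(model("capital of France"))  # prints "Berlin" -- confident and wrong
```

The model has no notion of truth, only of what its training data contained; garbage in, garbage out.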