Improving generative AI requires feeding it as much original content as possible. But fresh training data is finite, and many websites are now blocking scrapers. So the stupidest AI companies are …
Oh, that’s beautiful. This sits at the crux of generative AI’s disruptive potential: you can never tell for sure whether something is AI-generated. At least in theory. For most meaningful tasks its output is often dubious, but for the mind-rotting work done to train the models, there’s no way they can tell. Unless they monitor their microtaskers. But proctoring is no trivial task, and considering the pittance they pay microtaskers, I doubt any form of effective proctoring would be worth the cost.
In the end, Saltman will be the main victim of the disruption he hoped to unleash on everyone else.