Watched a demo of a new AI image generator at a tech meetup in Austin completely fail to render a simple request for a 'red bicycle'.
The model kept producing images of blue motorcycles, which seemed like a clear sign that its training data was fundamentally flawed. Has anyone else seen a basic prompt fail this badly with a supposedly advanced system?
2 comments
the_wesley · 4d ago
Austin Tech Meetup last year had a similar demo fail. I used to believe these models just needed more data. Seeing a system confuse basic objects like that changed my view. It's not about having more pictures. The training data itself can have weird gaps or wrong links. Makes you question what else it's getting totally wrong under the hood.
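To make the "gaps or wrong links" point concrete, here's a toy sketch of the kind of audit I mean. The captions and counting logic are made up for illustration, not from any real dataset:

```python
from collections import Counter
import itertools

# Hypothetical caption set; a real audit would stream millions of these.
captions = [
    "a blue motorcycle parked on the street",
    "blue motorcycle at a rally",
    "a fast blue motorcycle",
    "red sports car on the highway",
    "a child riding a bicycle",   # note: no color mentioned
    "bicycle leaning on a wall",
]

COLORS = {"red", "blue", "green"}
OBJECTS = {"bicycle", "motorcycle", "car"}

pair_counts = Counter()
for caption in captions:
    words = set(caption.lower().split())
    for color, obj in itertools.product(words & COLORS, words & OBJECTS):
        pair_counts[(color, obj)] += 1

# A missing pair like ('red', 'bicycle') is exactly the kind of gap
# that could push a model toward the pairings it HAS seen.
for color in sorted(COLORS):
    for obj in sorted(OBJECTS):
        print(color, obj, pair_counts.get((color, obj), 0))
```

On real data you'd want lemmatization and multi-word object handling, but even a crude count like this surfaces color-object pairings the model has literally never seen.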
daniel552 · 4d ago
Honestly I see that demo fail differently. It's not always about bad training data. Sometimes the model just gets stuck on a weird path during generation. I've seen a system draw a blue motorcycle because it linked "red" to "fast" and "fast vehicle" to "motorcycle" in its own internal logic. These models are so complex that a single bad result doesn't prove the whole training base is wrong. It just shows we're still figuring out how to guide them properly.
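A toy illustration of the chaining I mean, with an invented association graph (nothing in a real model is this explicit, but the drift pattern is the same idea):

```python
# Hypothetical concept-association graph; links and ordering are made up.
associations = {
    "red": ["fast", "fire truck"],
    "fast": ["motorcycle", "sports car"],
    "bicycle": ["ride", "wheels"],
}

def drift(concept, hops):
    """Follow the strongest (here: first-listed) association for N hops."""
    path = [concept]
    for _ in range(hops):
        next_links = associations.get(path[-1])
        if not next_links:
            break
        path.append(next_links[0])
    return path

# "red" -> "fast" -> "motorcycle": two hops and the color term has
# pulled the output toward the wrong vehicle entirely.
print(drift("red", 2))
```

Point being, each individual link can be perfectly reasonable in the data and the chain still lands somewhere wrong.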