OpenAI transcribed over a million hours of YouTube videos to train GPT-4
Earlier this week, The Wall Street Journal reported that AI companies were running into a wall when it comes to gathering high-quality training data. Today, The New York Times detailed some of the ways companies have dealt with this. Unsurprisingly, it involves doing things that fall into the hazy gray area of AI copyright law.

The story opens on OpenAI, which, desperate for training data, reportedly developed its Whisper audio transcription model to get over the hump, transcribing over a million hours of YouTube videos to train GPT-4, its most advanced large language model. That's according to The New York Times, which reports that the company knew this was legally questionable but believed it to be fair use. OpenAI president Greg...
AI companies have struggled to find high-quality training data, pushing them toward questionable practices. OpenAI transcribed more than a million hours of YouTube videos to train its language model GPT-4, despite knowing this was legally questionable. Google also pulled transcripts from YouTube, though both companies face legal and technical constraints. Meta considered paying for book licenses, or even acquiring a publisher outright, to obtain training data. The AI training field faces a looming data shortage, and companies may outrun the supply of new content before 2028.