Good thing that technology never ever improves…
why is this specific technology predestined to improve from its current, shitty state?
Spot the difference? It gets better because you have to do little more than throw more data at it; the AI figures out the rest. There is no human in the loop who has to figure out what makes a picture a picture and teach the AI to draw; the AI learns that simply by example. And it doesn’t matter what data you throw at it. You can throw music at it and it’ll learn how to do music. You throw speech at it and it learns to talk. And so on. The more data you throw at it, the better it gets, and we have only just started.
Everything you see today is little more than a proof of concept that shows this actually works. Over the next few years we will be throwing ever more data at it, building multi-modal models that can do text/video/audio together, AIs that can interact with the real world, and so on. There is tons of room to improve simply by adding more and different data, without any big changes in the underlying algorithms.
you seriously thought reposting AI marketing horseshit we’ve seen before would do anything other than cost you your account? sora gives a shit result even when openai’s marketing department is fluffing it — it made so few changes to the source material it’s plagiarizing that a bunch of folks were able to find the original video clips. but I’m wasting my fucking time — you’re already dithering like a cryptobro between “this technology is already revolutionary” and “we’re still early”
now fuck off
Wait, for real? I missed this, do you have a source? I want to hear more about this lol
it took me sifting through an incredible amount of OpenAI SEO bullshit and breathless articles repeating their marketing, but this article links to and summarizes some of that discussion in its latter paragraphs
bonus: in the process of digging up the above, I found this other article that does a much better job tearing into sora than I did — mostly because sora isn’t interesting at all to me (the result looks awful when you, like, look at it) and the claims that it has any understanding of physics or an internal world model are plainly laughable
See now, there’s your problem, you’re not supposed to.
ah yes, this (BITM) was indeed one of my Opened Tabs and on my (extremely) long list of places to review for regular content
same! which is why it’s maddening that I almost gave up on finding it — I had to reach back all the way to when sora was announced to find even this criticism, because all of the articles I could find since then have been mindless fluff. even the recent shit talking about how the OpenAI CTO froze when asked where they got the videos to train sora on are mostly just mid journalists slobbering about how nobody does gotcha questions like that anymore. not one bothered to link to any critical analyses of what sora is or what OpenAI does. and the whole time this article I couldn’t find via search was just sitting in my tabs.
speaking of which deluge, I ran across this and plan to give it (or a derivation of it) a test ride this week: https://chitter.xyz/@faoluin/112100440986051887
Yeah, people found the original bird video on YouTube within a few hours. Could’ve been the others too, but I was too busy at the time to track that down
I think it was also in the thread here at the time
No, if you spend a few seconds searching for stock images of that bird you’ll quickly find out that they all look more or less the same. So naturally, SORA produces something that looks very similar as well.
oh wow a fresh account with the exact same writing style and shit takes as the other poster, wonder who that could be
It’s a magnificent giveaway though. “All the stock images of that bird look the same to me”. Yeah, I agree that you’re not personally capable of critically assessing the material here.
“it’s not plagiarism, the output is just indistinguishable from plagiarism” oh how foolish of me to not consider the same excuse undergrads use to try and launder the paper they plagiarized
A Mystery for the Ages
in4 “well actually, Generative ML was discovered by Darwin”
stop saying ‘we’ unless you’re actually paid by these ghouls to work on this trash
they signed up here on the pretense that they’re an old r/SneerClub poster, but given how long they lasted before they started posting advertising for their machine god, I’m gonna assume they’re either yet another lost AI researcher come to dazzle us with unimpressive bullshit or a LWer trying to pull a fast one