Looks like no more TikTok promotion etc. for Barbie this weekend if social media influencers want to join SAG one day.
SAG-AFTRA Can Bar Non-Member Social Media Influencers From Guild Admission If They Promote For Studios During Strike (deadline.com)
And even if a social media influencer isn’t getting paid to push on social media for a studio/streamer, it’s best not to do so, even as a fan. “Influencers should refrain from posting on social media about any struck work regardless of whether they are posting organically or in a paid capacity,” reads Sagaftrastrike.org.
That also goes for Comic-Con, which attracts many social media influencers. They can’t promote for companies that SAG-AFTRA is striking against, including “appearances, panels, fan meet and greets, etc. involving struck work.”
> I love that they’re playing hardball like this and not messing around.

Heads up: per Variety, reviews are still allowed. It refers more to pure promotion/puff pieces (the lines between the two have admittedly been blurred).
With that said, RIP to any Chris Stuckmann or Dan Murrell reviews for the foreseeable future. Two of my favorites.
> All of it has been because big corporations suck. Would you like it if there was a job taken from you because the company decided it’d be better if they used AI?

Late to this, but it’s not like we’re all up in arms that our cars are built by robots instead of people turning wrenches.
> It will never be able to replace writers because LLMs and other generative algorithms cannot truly create, only rehash content.

So Disney already. But without writers.
> AI bots also don’t have human experiences to draw from, so that would just make everything even more generic than it already is.

This is where the focus needs to be. Not “corporations are bad”, but the fact that the quality of new storytelling will be greatly diminished. And it comes down to us, the consumers, supporting those stories.
> Some industries are done, like non-trial attorneys. It’s over for them. 100%. The writing is on the wall and there’s nothing you can do.

Not really… this is still in its infancy. The problem with AI is that its current iteration is basically a next-gen search engine, without the intuition to know what’s real and what’s not. Non-trial attorneys using ChatGPT, for example, have already been disbarred because the engine provided them dummy cases to use as evidence toward common law. If we can get to a future where all the crap from the internet/intelligence databases gets scrubbed, then sure… but we’re not close.
> Not really… this is still in its infancy. The problem with AI is that its current iteration is basically a next-gen search engine, without the intuition to know what’s real and what’s not. Non-trial attorneys using ChatGPT, for example, have already been disbarred because the engine provided them dummy cases to use as evidence toward common law. If we can get to a future where all the crap from the internet/intelligence databases gets scrubbed, then sure… but we’re not close.

I appreciate the dialog on this.
> I appreciate the dialog on this.

It might give you bad advice or improperly fill out something, though, at which point you have little recourse, as there is no real “attorney” to file malpractice against, just an algorithm that has the illusion of intellect.
Obviously I don’t know all the ins and outs of the tech, which is way above my pay grade. But I’m sure finding proper case law from a set number of confirmed websites can be learned.
And from what I’ve read, it’s more about the layman needing an attorney than an actual attorney. Me using an AI “attorney” vs. an actual one isn’t going to get me disbarred, for example.
Late to this, but it’s not like we’re all up in arms that our cars are built by robots instead of people turning wrenches.
Technological revolutions happen all the time. Some with great results and some with bittersweet ones.
The cotton gin extending slavery in the South, for example, as cotton became economically viable again.
AI is certainly in the latter. It also comes down to global competition. If other countries use AI (and they will) and we don’t due to morals, then the US is bypassed and (name your country) is now the primary influencer for global policy change.
So Disney already. But without writers.
This is where the focus needs to be. Not “corporations are bad”, but the fact that the quality of new storytelling will be greatly diminished. And it comes down to us, the consumers, supporting those stories.
Some industries are done, like non-trial attorneys. It’s over for them. 100%. The writing is on the wall and there’s nothing you can do.
But writers should exist because they, and only they, can create new experiences. I feel like this should be singularly focused on creativity and quality, not “corporations are bad”. This isn’t the first, nor will it be the last, revolution that renders occupations irrelevant.
Just wait until 3.5 million truckers (almost all men) are obsolete.
> It kind of reminds me a little bit of how bands and studios were really mad about Napster and mp3 files in 2000. They tried really hard to stop mp3s.
> But then YouTube showed up, and iTunes. And music was never the same ever again. It never went back.
> They can try to stop AI, but I don’t see how it would be fully stopped. And the United States might stop it, but what happens when other countries use AI to do movies with American actors? Take their likeness.

Yeah
> They can try to stop AI, but I don’t see how it would be fully stopped.

I think we get Tom Cruise to get the key and shut it down.
> As someone who dabbles in legal copy and has been asked to use high-end, industry-grade AI to assist with that work, I can tell you it is terrible for legal language or anything that requires nuance. It spits out very confident statements that are in some manner wrong about 60% of the time, no matter how specific your prompts.

This has been my experience with STEM LLMs too. You can correct them and they’ll spit out something slightly more correct but still wrong.
> This has been my experience with STEM LLMs too. You can correct them and they’ll spit out something slightly more correct but still wrong.

My experience has been that this is the case for anything that would fall within the knowledge domain of the people working on it, like coding. Asking it anything else just makes it spit out generic garbage.