  • I think this particular shot is a bit of a non-story, honestly. From what I read in the Guardian’s article, they composited in a falling building. The footage of the building collapsing was itself generated by AI (I’m guessing Veo 3).

    Generating these kinds of short, repetitive assets with AI is probably the ideal use case, aside from compositing.

    It’s low-cost, light on energy use, and it frees up the VFX team to work in other areas.

    I get that the stupid end goal is to replace actors, cameras, sets, and the like, consolidating all of the creative output into one industrialized pipeline, but that’s capitalism at work, not technology. The movie industry has been eating artists alive for decades. A creative team using the tools at its disposal in a reasonable, grounded way is not the enemy. Subscription services might be.

  • Really? I mean, it’s melodramatic, but if you went back through history and asked writers and intellectuals whether a machine could write poetry, solve mathematical equations, and radicalize people effectively enough to cause a minor mental health crisis, I think they’d be pretty surprised.

    LLMs do expose something about intelligence: much of what we recognize as intelligence and reason can be distilled from sufficiently large quantities of natural language. Not perfectly, but isn’t it just the slightest bit revealing?
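
    To make that concrete, here’s the crudest possible version of the idea, a bigram model (my own toy, Python standard library only, nothing from the article). An LLM is this same “distill statistics from text” move scaled up by many orders of magnitude:

    ```python
    # Crude demo: a bigram model "distills" structure from raw text
    # with no grammar or rules coded in anywhere.
    import random
    from collections import defaultdict

    corpus = (
        "the cat sat on the mat . the dog sat on the rug . "
        "the cat chased the dog . the dog chased the cat ."
    ).split()

    # All the "knowledge" here is co-occurrence statistics pulled from the text.
    follows = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)

    def generate(start="the", length=8):
        """Sample a continuation using only the distilled statistics."""
        out = [start]
        for _ in range(length):
            out.append(random.choice(follows[out[-1]]))
        return " ".join(out)

    print(generate())  # e.g. "the dog chased the cat sat on the mat"
    ```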


  • A child may hallucinate, lie, misunderstand, etc., but we wouldn’t say the foundations of a complete adult aren’t there, and we wouldn’t judge the child to be not conscious. I’m not saying that LLMs are conscious because they say so (they can be made to say anything), but rather that it’s difficult to be confident that humans possess some special spice of consciousness that LLMs do not, because we can also be convinced to say anything.

    LLMs can reason (somewhat unreliably) with a fraction of a human brain’s compute power, while running on hardware that was made for graphics processing. Maybe they are conscious, but only in some pathetically small way that will only become evident when they scale up, like a child.
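
    Rough back-of-envelope for the “fraction of a brain” claim (every number here is my own order-of-magnitude guess, and brain compute estimates in particular are heavily contested):

    ```python
    # Back-of-envelope only; every figure is a rough, contested estimate.
    brain_ops_per_s = 1e15              # common order-of-magnitude guess for a human brain
    model_params = 70e9                 # a typical large open-weight LLM (assumed)
    flops_per_token = 2 * model_params  # standard ~2*N FLOPs/token for a dense forward pass
    tokens_per_s = 50                   # plausible single-GPU serving speed (assumed)

    llm_ops_per_s = flops_per_token * tokens_per_s  # ~7e12 FLOP/s
    print(f"LLM inference: ~{llm_ops_per_s:.0e} FLOP/s, "
          f"~{llm_ops_per_s / brain_ops_per_s:.1%} of the brain estimate")  # ~0.7%
    ```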



  • Why can’t complex algorithms be conscious? In fact, AI models can be directed to reason about themselves, context can be made persistent, and we can measure activations showing that they are doing so (there’s a toy sketch of what I mean at the end of this comment).

    I’m sort of playing devil’s advocate here, but “consciousness requires contemplation of self, which requires the ability to contemplate” is subjective, and nearly any AI model, even a rudimentary one, can be made to insist that it contemplates itself.
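
    Here’s the toy sketch of “persistent context plus directed self-reflection” I mentioned; llm() is a made-up stand-in for any text-completion call, not a real API:

    ```python
    # Toy sketch of "persistent context + directed self-reflection".
    # llm() is a hypothetical placeholder, not any real library's API.
    def llm(prompt: str) -> str:
        return "Reflecting: my earlier answer assumed more than it showed."  # placeholder

    history: list[str] = []  # persistent context: prior output survives across calls

    def reflect(question: str, rounds: int = 2) -> str:
        answer = llm(question)
        for _ in range(rounds):
            history.append(answer)
            # Feed the model its own prior output and direct it to critique itself.
            answer = llm(
                "Here is what you said earlier:\n"
                + "\n".join(history)
                + "\nReflect on your own reasoning and revise it."
            )
        return answer

    print(reflect("What are you?"))
    ```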