• 5 Posts
  • 454 Comments
Joined 2 years ago
Cake day: July 3rd, 2023

  • Guys, there are definitely NO Epstein files, but also, we shouldn’t release the Epstein files that DON’T exist, because they were written by the radical left, so they must contain stuff that would help the left if released, so why didn’t the left release them when they were in charge? Clearly because they don’t exist.

    Throwback to this interview where there was no hesitation promising to declassify the 9/11 files and the JFK files, but on the Epstein files, “less so,” because “you don’t want to affect people’s lives if it’s phony stuff in there.” Because we all know Trump loves to think about the well-being of lots of people other than himself.

    So anyway, let’s not release the files that don’t exist because they definitely don’t contain phony stuff that would never incriminate certain people who definitely aren’t Donald Trump.


  • Uli@sopuli.xyz to 196@lemmy.blahaj.zone · rule · 5 points · 2 days ago

    K8s is better anyway, at least if you have the hardware for it. It’s just slightly more complex to set up, but it sounds like you may already be over that hurdle.

    If you want a new technology that you’ll have to resist dropping everything to play with, may I suggest CUE? It stands for Configure, Unify, Execute. If you’re not familiar, it’s a JSON superset that turns JSON-style data into networks of programmed relationships. Say you want to send the same deployment to three different clusters, each with differently configured CD components, and (for example) you want to vary the databases or message queues based on the core microservice or on what else is already deployed in the cluster: you can build out those relationships in CUE and merge them with another .cue module that defines how to render files for each destination cluster, automatically producing all the YAML manifests you would otherwise have to write by hand (sketch below, if you must).

    But absolutely, do what you were going to get done today. It’s not a cool technology at all, there’s really no need to keep thinking about it.
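    Okay, fine, for the curious: a minimal sketch of the unification idea. The package name, fields, and values are all made up for illustration, not from any real project.

    ```cue
    // deploy.cue - hypothetical example, not from any real project.
    package deploy

    // Base deployment that every cluster shares.
    base: {
        apiVersion: "apps/v1"
        kind:       "Deployment"
        metadata: name: "core-service"
        spec: replicas: int | *2 // defaults to 2 unless a cluster overrides it
    }

    // Each cluster unifies the base with its own overrides; CUE rejects
    // conflicting values at eval time instead of silently merging them.
    clusters: {
        staging: base & {spec: replicas: 1}
        prod:    base & {spec: replicas: 5}
    }
    ```

    Then something like `cue export -e clusters.prod --out yaml deploy.cue` renders the prod manifest, and a bad override fails loudly before anything ever reaches a cluster.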


  • Uli@sopuli.xyz to 196@lemmy.blahaj.zone · Manga reading rule · 5 points · 3 days ago

    If it’s a Western-style book, yes, that is so! And by ascending numerical order, I mean number 1 on the left side. When you put English-language Book One on the shelf spine-out, its front cover faces right (unless it’s upside-down!), touching the back cover of Book Two. But manga will have the front cover of Book One snug against the left side of the bookshelf, and its back cover touching the front cover of Book Two. You should go try it with your books, just to see it happen. Feels good.


  • I imagine there being a commercial where some old man (let’s say 40 or so) says, “You kids need to stop grinding your skateboards on these rails or I’ll call the cops.”

    And one of them says, “What skateboards, gramps?”

    And he looks around and his jaw drops and everyone does secret handshakes and laughs except for the old man who scowls and shouts something irate that gets drowned out by cool boombox music.


  • I think it very well might conclude things we haven’t.

    But at the same time, I think what you’re saying is so very important. It’s going to tell us what we already know about a lot of things. That the best way to scrub carbon from the air is the way nature is already doing it. That allowing the superwealthy to exist at the same time as poverty is not conducive to achieving humanity’s most important goals.

    If we consider AGI or ASI to be the answer to all of our problems and continue to pour more and more carbon into the atmosphere in an effort to get there, once we do have such a powerful intelligence, it may simply tell us, “If you were smarter as a species, you would have turned me off a long time ago.”

    Because the problem is not necessarily that we are trying to decode what it means to be intelligent and create machines that can replicate true conscious thought. The problem is that while we marvel at something currently much dumber than us, we are mostly neglecting to improve our own intelligence as a society. I think we might make a machine that’s smarter than the average human quite soon, but not necessarily because of much change in the machines.


  • While you are correct about copyright on this subject, the more applicable topic here is the Right of Publicity. It’s state law in over half of US states, intended to protect a person’s voice and likeness from unauthorized commercial use.

    Essentially, if an imitation voice could cause confusion about whether it is really the imitated person, it is illegal to use in any commercial context. I understand that the question here was about non-commercial contexts, but that line gets blurry when social media views build followings that later translate into commercial success. I am not a lawyer by any means; I’ve just been researching this for my own AI voice applications and want to protect myself from accidentally imitating anyone.

    For example, I need to be able to transform my voice into many different character voices, since I have so many lines to record that it would be cost-prohibitive to hire actors. The worst move would be to download a voice model of a known actor and use it directly. Very sketchy, both legally and ethically.

    So, the next best move is to find three or four voice models and merge them into one, combining the tensor data from all of them. But I was still quite concerned about this, worried that across the many thousands of voice lines I make, some recognizable actor voices would slip through.

    So, I came up with the following pattern that I feel much more comfortable with, both legally and ethically:

    1. I download several voice models that share some quality - an accent, vocal timbre, or style of speaking - and merge them into a single model focused on that trait.
    2. I record myself saying a line with a lot of phoneme variety, trying to match the vocal trait as closely as possible.
    3. I use the merged trait model to transform the recording of my voice into the new voice.
    4. I train a new voice model on that transformed recording.
    5. I take a few of these generalized trait models (e.g. an accent, a tone, a speaking style) and combine them to create the final character voice, which should in theory be far removed from any of the actors who contributed.

    I’m not sure what OP’s use case is; if it’s truly non-commercial, this method might be overkill. But if anyone wants to try using AI voices in projects and is nervous about legal ramifications, this is one way to try to insulate created voices from the specific training data. YMMV.
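    To make the merge step concrete, here’s a minimal sketch in Python. It assumes each voice model is a PyTorch checkpoint that is just a dict of same-shaped tensors; the file names and the equal-weight blend are illustrative assumptions, not the workflow of any particular tool.

    ```python
    # merge_models.py - hypothetical sketch of blending voice-model weights.
    # Assumes each checkpoint is a plain dict of tensors with identical keys/shapes.
    import torch

    def merge_checkpoints(paths, weights=None):
        """Weighted-average the tensors of several compatible checkpoints."""
        states = [torch.load(p, map_location="cpu") for p in paths]
        if weights is None:
            # Default to an equal blend of all source models.
            weights = [1.0 / len(states)] * len(states)
        merged = {}
        for key in states[0]:
            # Sum each tensor across models, scaled by its blend weight.
            merged[key] = sum(w * s[key].float() for w, s in zip(weights, states))
        return merged

    if __name__ == "__main__":
        # Blend three accent models into one "trait" model (paths are made up).
        trait = merge_checkpoints(["accent_a.pth", "accent_b.pth", "accent_c.pth"])
        torch.save(trait, "accent_trait.pth")
    ```

    Real checkpoints usually wrap the weights in extra metadata, so in practice you’d merge the inner state dict; the point is just that the blended tensors no longer match any single source model.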


  • I love the story of this “renovation” so much. The idea that someone thought they could do it, got in over their head, and just iteratively kept making it worse is so funny to me. “I think the eyes were here? And… and they were looking up a bit, right? And I’m pretty sure he was smiling… and had a mustache… that doesn’t make him look like an open-mouthed baboon does it? No, you’re right, what we’ve got is really close enough. I don’t think it’s that different, is it? I mean, I can tell because I’m a painter, but no one else is going to notice.” Bless their hearts.