• 0 Posts
  • 55 Comments
Joined 1 year ago
Cake day: June 27th, 2023





  • mm_maybe@sh.itjust.works to Microblog Memes@lemmy.world · “Opt out”
    1 day ago

    I live in a rural part of a deep blue Northeast state and have been thinking about this a lot. Most of my surrounding area is predictably liberal college towns but the town next door to me is very MAGA and I have to drive through it to get to the highway. Honestly, I want to know what it takes to get those people to leave so that we can secure and expand a safe haven here…






  • You want an e-Golf, which was a beautifully stupid, half-hearted implementation of an EV by Volkswagen. Because they really didn’t want to build it, they spent almost nothing on redesign, and in the process created a ridiculously fun vehicle to drive, with sporty handling and high torque at low speed, while nothing else changed from the classic Golf design: door handles, freaking dials on the dashboard, manual climate and audio controls. Sadly, it isn’t being made anymore. We’ve outgrown ours and it’s time for me to let someone else enjoy the experience (especially with the Biden used-EV sales incentives going away soon), but my daughter loves it so much that I’m dreading the tantrum I know will come when I sell it.





  • There are a bunch of reasons why this could happen. First, it’s possible to “attack” some simpler image classification models: if you collect a large enough sample of their outputs, you can mathematically derive a way to process any image so that it won’t be correctly identified. There have also been reports that even simpler processing, such as blending a real photo of a wall with a synthetic image at a very low opacity, can trip up detectors that haven’t been trained to be more discerning.

    But it all comes down to how you construct the training dataset, and I don’t think any of this is a good enough reason to give up on using machine learning for synthetic media detection in general; in fact, this example gives me the idea of using autogenerated captions as an additional input to the classification model. The challenge there, as in general, is keeping such a model from assuming that all anime is synthetic, since “AI artists” seem to be overly focused on anime and related styles…
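    The low-opacity blending trick mentioned above is just pixel-level alpha compositing; a minimal sketch with numpy (the 5% alpha, array shapes, and toy image values are illustrative assumptions, not a guaranteed way to fool any particular detector):

    ```python
    import numpy as np

    def blend(real: np.ndarray, synthetic: np.ndarray, alpha: float = 0.05) -> np.ndarray:
        """Alpha-blend a synthetic image into a real one.

        With a small alpha the result looks essentially identical to the real
        photo, yet may carry enough synthetic signal to confuse a naive
        detector that wasn't trained on such mixtures.
        """
        # Both images as float arrays in [0, 1] with the same (H, W, C) shape.
        blended = (1.0 - alpha) * real + alpha * synthetic
        return np.clip(blended, 0.0, 1.0)

    # Toy example: a "real" mid-gray image and a "synthetic" all-white image.
    real = np.full((4, 4, 3), 0.5)
    synthetic = np.ones((4, 4, 3))
    out = blend(real, synthetic, alpha=0.05)
    ```

    At 5% alpha, each pixel moves at most a few percent away from the real photo, which is well below what a human would notice.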



  • r/SubSimGPT2Interactive for the lulz is my #1 use case

    I do occasionally ask Copilot programming questions, and it gives reasonable answers most of the time.

    I use code autocomplete tools in VSCode but often end up turning them off.

    Controversial, but Replika actually helped me out during the pandemic when I was in a rough spot. I trained a copyright-safe (theft-free) bot on my own conversations from back then and have been chatting with the me side of that conversation for a little while now. It’s like getting to know a long-lost twin brother, which is nice.

    Otherwise, I’ve used small LLMs and classifiers for a wide range of tasks: sentiment analysis, toxic content detection for moderation bots, AI media detection, summarization… I like using these better than throwing everything at a huge model like GPT-4o because they’re more focused and less computationally costly (hence also better for the environment). I’m working on training some small copyright-safe base models to do certain sequence prediction tasks that come up in my data science work, but they’re still a bit too computationally expensive for my clients.
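    For small-classifier jobs like the sentiment analysis mentioned above, something as light as TF-IDF features into a linear model often goes a long way; a minimal sketch with scikit-learn (the tiny training set is made up purely for illustration):

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Tiny illustrative dataset: 1 = positive, 0 = negative.
    texts = [
        "I love this, it works great",
        "fantastic answer, very helpful",
        "great explanation, thank you",
        "what a wonderful little tool",
        "I hate this, it broke everything",
        "terrible answer, totally useless",
        "awful explanation, waste of time",
        "what a horrible buggy mess",
    ]
    labels = [1, 1, 1, 1, 0, 0, 0, 0]

    # A "small model": sparse TF-IDF features feeding a linear classifier.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(texts, labels)

    prediction = model.predict(["this tool is great, I love it"])[0]
    ```

    Compared to calling a huge hosted LLM per message, a pipeline like this trains in milliseconds and runs anywhere, which is exactly the focused, low-cost trade-off described above.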