
Posts: 3 · Comments: 42 · Joined: 2 mo. ago

  • The Fediverse is one of the precious few bastions where real talk can happen without algorithmic shaping and interference. News and politics are a fundamental part of society, and inseparable from real discussion. I disagree with the idea that to make the Fediverse better, we have to sacrifice these forms of discussion in favor of "anything else".

    Your call to stop, slow down, or post literally anything else is inadvertently also a call for self-censorship in service of your personal ideal. Claiming that this is the answer to the problem of attracting new membership is expressing your own preferences and applying them broadly, and it isn't borne out by the facts. People are not avoiding any of the major social media platforms over these things, and it seems unlikely they are avoiding the Fediverse for this reason either.

    The Fediverse's lower membership is more likely a complicated problem involving things like a broad lack of awareness of it, and the average person being put off by its technical-seeming complexity, which makes it appear less accessible. People are also reluctant to step outside their existing communities, which is exacerbated by the fact that those communities tend to settle onto the platforms that feel easier and more familiar.

    The bottom line is that I respect your right to your opinions and your right to engage with the Fediverse according to your own needs, wants, and perspectives. I do, however, strongly disagree with your call for community-wide self-censorship in the name of filling the Fediverse with positivity at the expense of real talk, under the premise of attracting new membership.

    You're more than welcome to spread as much positivity as you want wherever you want, and to distance yourself from anything you don't personally favor. By all means start a community, encourage others to start communities based on your preferences. But calls for self-censorship on the Fediverse are problematic at best, especially given the circumstances we are currently living in.

  • Awesome work. And I agree that we can have good and responsible AI (and other tech) if we start seeing it for what it is and isn't, and actually being serious about addressing its problems and limitations. It's projects like yours that can demonstrate pathways toward achieving better AI.

  • The material might seem a bit dense and technical, but it presents concepts which may be critical to include in conversations around AI safety, and safety conversations are among the most important we should be having.

  • Technology @lemmy.world

    Understanding Why LLMs Choose To Behave Badly

    arxiv.org/abs/2601.08673
  • This is a subject that people (understandably) have strong opinions on. Debates get heated sometimes and yes, some individuals go on the attack. I never post anything with the expectation that no one is going to have bad feelings about it and everyone is just going to hold hands and sing a song.

    There are hard conversations that need to be had regardless. All sides of an argument need to be open enough to have it and not just retreat to their own cushy little safe zones. This is the Fediverse, FFS.

  • I have never once said that AI is bad. Literally everything I've argued pertains to the ethics and application of AI. It's reductive to call all arguments critical of how AI is being implemented "AI bad".

    It's not even about it being disruptive, though I do think discussions about that are absolutely warranted. Experts have pointed to potentially catastrophic "disruptions" if AI isn't dealt with responsibly, and we are currently anything but responsible in our handling of it. It's unregulated, running rampant everywhere, claiming to be all things to all people, and leaving a mass of problems in its wake.

    If a specific individual or company is committed to behaving ethically, I'm not condemning them. A major point to understand is that those small, ethical actors are the extreme minority. The major players, like those you mentioned, are titans. The problems they create are real.

  • Not all problems may be cured immediately. Battles are rarely won with a single attack. A good thing is not the same as nothing.

  • He's jumping ship because it's destroying his ability to eke out a living. The problem isn't a small one; what's happening to him isn't an isolated case.

  • I agree with you that there can be value in "showing people that views outside of their likeminded bubble[s] exist". And you can't change everyone's mind, but I think it's a bit cynical to assume you can't change anyone's mind.

  • From what I've heard, the influx of AI data is one of the reasons actual human data is becoming increasingly sought after. AI training AI has the potential to become a sort of digital inbreeding that suffers in areas like originality and other ineffable human qualities that AI still hasn't quite mastered.

    I've also heard that this particular approach to poisoning AI is newer and thought to be quite effective, though I can't personally speak to its efficacy.

  • "Public" is a tricky term. At this point everything is being treated as public by LLM developers. Maybe not you specifically, but a lot of people aren't happy with how their data is being used to train AI.

  • Is the only imaginable system for AI to exist one in which every website operator, or musician, artist, writer, etc has no say in how their data is used? Is it possible to have a more consensual arrangement?

    As for the question of ethics, there is a lot of ground to cover, and much of it is already being discussed. I'll basically reiterate what I said about data rights: I believe they are fairly fundamental to human rights, for a lot of reasons. AI is killing open source and claiming the whole of human experience for its own training purposes. I find that unethical.

  • I can't speak for everyone, but I'm absolutely glad to have good-faith discussions about these things. People have different points of view, and I certainly don't know everything. It's one of the reasons I post, for discussion. It's really unproductive to make blanket statements that try to end discussion before it starts.

  • I think you'd probably have to hide out under a rock to miss out on AI at this point. Not sure even that's enough. Good luck finding a regular rock and not a smart one these days.

  • AI companies could start, I don't know- maybe asking for permission to scrape a website's data for training? Or maybe try behaving more ethically in general? Perhaps then they might not risk people poisoning the data that they clearly didn't agree to being used for training?
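    For what it's worth, the mechanism for "asking permission" already exists: the robots.txt convention lets site operators opt out of specific crawlers. A minimal sketch of what a consent-respecting crawler could look like, using Python's standard `urllib.robotparser` (the robots.txt content and the agent names here are hypothetical examples, not any real site's policy):

    ```python
    # Sketch: check a site's robots.txt before fetching pages for AI training.
    # A real crawler would fetch robots.txt per-site; here we parse a sample inline.
    from urllib.robotparser import RobotFileParser

    robots_txt = """\
    User-agent: GPTBot
    Disallow: /

    User-agent: *
    Allow: /
    """

    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())

    def may_crawl(user_agent: str, url: str) -> bool:
        """Return True only if the site's robots.txt permits this agent to fetch the URL."""
        return parser.can_fetch(user_agent, url)

    print(may_crawl("GPTBot", "https://example.com/articles/1"))    # AI crawler is disallowed
    print(may_crawl("SearchBot", "https://example.com/articles/1")) # other agents are allowed
    ```

    Honoring an opt-out like this is voluntary, which is precisely the complaint: nothing currently forces training crawlers to check it.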

  • Technology @lemmy.world

    A Project to Poison LLM Crawlers

    rnsaffn.com/poison3/
  • Your engagement on this issue is still clearly in bad faith. It reads like a common troll play where they attempt to draw a mark down a rabbit hole.

    Understand that I don't play these games. This is me leaving you to your checkerboard. Take care.

    [Edited for grammar and brevity]

  • A very nuanced and level-headed response, thank you.

  • I do agree with your point that we need to educate people on how to use AI in responsible ways. You also mention the cautious approach taken by your kids' school, which sounds commendable.

    As for the idea of preparing kids for an AI future in which employers might fire AI-illiterate staff, this sounds to me more like a problem of preparing people to enter the workforce, which is generally what college and vocational courses are meant to handle. I doubt many of us would have any issue if they had approached AI education this way. It is very different from the current move to include it broadly in virtually all classrooms without consistent guidelines.

    (I believe I read the same post about the CEO, BTW. It sounds like the CEO's claim may likely have been AI-washing, misrepresenting the actual reason for firing them.)

    [Edit to emphasize that I believe any AI education done to prepare people for employment should be approached as optional vocational education, confined to the specific relevant courses rather than broadly applied]

  • While there are some linked sources, the author fails to specify what kind of AI is being discussed or how it is being used in the classroom.

    One of the important points is that there are no consistent standards or approaches toward AI in the classroom. There are almost as many variations as there are classrooms. It isn't reasonable to expect a comprehensive list of all of them, and it's neither the point nor the scope of the discussion.

    I welcome specific and informed counterarguments to anything presented in this discussion, I believe many of us would. I frankly find it ironic how lacking in "nuance or level-headed discussion" your own comment seems.

  • Technology @lemmy.world

    A generation taught not to think: AI in the classroom