  • Eh, I kinda get it. I think there should be a default instance with curated feeds that new users are auto-subscribed to.

    I don't remember if this is the case, since I signed up almost a year ago, and I use Lemmy exclusively on my phone with Voyager.

  • Samsung A9+ goes on sale for about $150 every once in a while.

    Kids Fire HD tablets generally go for less than that. There's not really any difference between the adult and kids versions tbh.

  • Windows laptops generally get trashy battery life, and if this is going to tank it further, I'd just run Linux full-time on my family laptop and call it a day.

    The only reason we kept Windows was my wife's comfort with it, and Zoom sometimes glitches out on Linux.

  • LocalLLaMA @sh.itjust.works
    fatboy93 @lemm.ee

    Small guide to run Llama.cpp on Windows with a discrete AMD GPU

    Hi!

    I have an ASUS AMD Advantage Edition laptop (https://rog.asus.com/laptops/rog-strix/2021-rog-strix-g15-advantage-edition-series/) that runs Windows. I still haven't gotten around to installing Linux and setting it up the way I like, even after more than a year.

    I'm just dropping a small write-up of the setup I'm using with llama.cpp to run on the discrete GPU using CLBlast.

    You can use Kobold, but it's meant more for role-playing stuff, and I wasn't really interested in that. Funny thing is, Kobold can also be set up to use the discrete GPU if needed.
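    To give a sense of where the steps below end up, here's a minimal sketch of the final invocation. The model filename, the device index, and the layer count are placeholders for illustration; check the device list that main.exe prints at startup on your machine:

    ```
    cd D:\Apps\llama

    :: Point the CLBlast backend at the discrete AMD GPU instead of the iGPU.
    :: The platform can be matched by name; the device index varies per machine.
    set GGML_OPENCL_PLATFORM=AMD
    set GGML_OPENCL_DEVICE=1

    :: -m points at a ggml-format model file (placeholder name here);
    :: -ngl sets how many layers to offload to the GPU.
    main.exe -m llama-2-7b-chat.ggmlv3.q4_0.bin -ngl 32 -p "Hello!" -n 128
    ```

    If nothing offloads, the startup log shows which OpenCL platform and device were actually picked.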

    1. For starters, you'll need llama.cpp itself from here: https://github.com/ggerganov/llama.cpp/tags. Pick the CLBlast version, which will help offload some computation to the GPU. Unzip the download to a directory; I unzipped it to a folder called "D:\Apps\llama".
    2. You'll need an LLM now, and that can be obtained from HuggingFace or wherever you'd like. Just note that it should be in ggml format. If you have a doubt