I’m not sure either, Win 10/11 are pretty quick to get going and Ubuntu is not much longer than that. If I have to hard reset the mbp for work, it’s a nice block of slacker time :)
Behohippy
Gamer, rider, dev. Interested in anything AI.
- 7 Posts
- 23 Comments
Behohippy@lemmy.world to Games@lemmy.world • The Weekly 'What are you playing?' Discussion - 20-07-2023 (English)
2 · 3 years ago
Halls of Torment. $5 game on Steam that's like a Vampire Survivors clone, but with more RPG elements to it.
Behohippy@lemmy.world to Selfhosted@lemmy.world • Intel is quitting on its adorable, powerful, and upgradable mini NUC computers
4 · 3 years ago
These are amazing. Dell, Lenovo and I think HP made these tiny things, and they were so much easier to get than Pis during the shortage. Plus they're incredibly fast in comparison.
I’ve got a background in deep learning and I still struggle to understand the attention mechanism. I know it’s a key/value store but I’m not sure what it’s doing to the tensor when it passes through different layers.
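For what it's worth, the "key/value store" intuition can be made concrete in a few lines: each query scores every key by dot product, the scores are softmaxed into weights, and the output is a weighted blend of the value vectors. A minimal single-head sketch, assuming NumPy (function name and toy shapes are just for illustration):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query row blends the value rows, weighted by
    query/key similarity (softmax of scaled dot products)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # (n_q, n_k) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # (n_q, d_v) blended values

# Toy self-attention: 3 tokens, 4-dim embeddings, Q = K = V
x = np.random.default_rng(0).normal(size=(3, 4))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (3, 4): same shape in, each row now a mix of all rows
```

The tensor passing through each layer keeps its (tokens × dims) shape; what changes is that every token's vector becomes a context-dependent mixture of the others.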
Behohippy@lemmy.world to Android@lemmy.ml • New android podcast "android faithful" to fill the hole the cancelled twit show, All about Android, left in our hearts.
4 · 3 years ago
Subscribed. That last episode of AAA was heartbreaking.
Behohippy@lemmy.world to Singularity | Artificial Intelligence (ai), Technology & Futurology@lemmy.fmhy.ml • We just hit 1150 subscribers and 220 posts 🔥✊. If you are a lurker please help us grow the community by commenting and posting. ✌️ (English)
2 · 3 years ago
I'm on lemmy.world and the sidebar shows 401 subscribers. Is that just a sub count from the local instance, or global?
Behohippy@lemmy.world to Singularity | Artificial Intelligence (ai), Technology & Futurology@lemmy.fmhy.ml • Microsoft LongNet: One BILLION Tokens LLM — David Shapiro ~ AI (06.07.2023) (English)
4 · 3 years ago
Also not sure how that would be helpful. If every prompt needs to rip through those tokens first, before predicting a response, it'll be stupid slow. Even now with llama.cpp, it's annoying when it pauses to do the context window shuffle thing.
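That pause comes from the context buffer filling up: some older tokens get evicted and the kept ones have to be re-evaluated before generation resumes. A hypothetical sketch of the idea, not llama.cpp's actual implementation (function name and the keep-the-prefix-plus-recent-half policy are assumptions for illustration):

```python
def shift_context(tokens, max_ctx, keep_prefix=4):
    """When the token buffer exceeds max_ctx, keep the first
    keep_prefix tokens (e.g. the system prompt) plus the most
    recent chunk, discarding the middle. Returns the shifted
    buffer and whether an expensive re-evaluation is needed."""
    if len(tokens) <= max_ctx:
        return tokens, False
    keep_recent = (max_ctx - keep_prefix) // 2
    shifted = tokens[:keep_prefix] + tokens[-keep_recent:]
    return shifted, True  # True: kept tokens must be re-processed

tokens = list(range(100))          # toy token ids
shifted, reeval = shift_context(tokens, max_ctx=32)
```

The re-evaluation of the surviving tokens is the stall you feel, and it's why a billion-token window would make every prompt's prefill cost enormous.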
Behohippy@lemmy.world to Rimworld@lemmy.world • Are there any Vanilla Expanded mods you avoid? (English)
1 · 3 years ago
Same. I loved the idea of what VE does, but playing the game was just a confusing mess for me. I stick to the same 8 mods I always use.
Behohippy@lemmy.world to Technology@lemmy.ml • Microsoft Releases 1.3 Bn Parameter Language Model, Outperforms LLaMa (English)
1 · 3 years ago
Bad article title. This is the "Textbooks Are All You Need" paper from a few days ago. It's programming-focused and I think Python only. For general-purpose LLM use, LLaMA is still better.
Behohippy@lemmy.world to AI@lemmy.ml • The AI Feedback Loop: Researchers Warn Of "Model Collapse" As AI Trains on AI-Generated Content (English)
1 · 3 years ago
Any data sets produced before 2022 will be very valuable compared to anything after. Maybe the only way we avoid this is to stick to training LLMs on older data and prompt-inject anything newer, rather than training on it.
Yep, I’m using an RTX2070 for that right now. The LLMs are just executing on CPU.
Do you recommend this email provider? Lots of people looking to get off gmail lately.
Are you running your own mail server? I only ever integrated Spamassassin with postfix.
Stable Diffusion (Stability AI version), text-generation-webui (WizardLM), a text embedder service with Spacy, Bert and a bunch of sentence-transformer models, PiHole, Octoprint, Elasticsearch/Kibana for my IoT stuff, Jellyfin, Sonarr, FTB Minecraft (customized pack), a few personal apps I wrote myself (todo lists), SMB file shares, qBittorrent and Transmission (one dedicated to Sonarr)… Probably a ton of other stuff I’m forgetting.
Yup, mostly running pretrained models for text embedding and some generative stuff. No real fine tuning.
Yup, typically we get into it after upgrading an older PC or something and, instead of selling the parts, just turn it into a server. You can also find all sorts of cheap, good off-lease office machines on eBay.
Wow, a reply I made in another community ended up under this one. Yeah I’m doing a lot of work on local models and text embedding models for vector search.
SD mostly uses the GPU, so it’s pretty light on everything else. The largest process is probably web-chat-ui with Wizard-30b model running.
Behohippy@lemmy.world to Gaming@lemmy.ml • Sea of Thieves: The Legend of Monkey Island - Announcement Trailer (English)
1 · 3 years ago
I might try Sea of Thieves just for this.


Advances in this space have moved so fast that it's hard to predict where we'll end up or how quickly we'll get there.
Meta's release of LLaMA sparked a ton of open-source innovation, showing you could run models nearly at ChatGPT's level with fewer parameters, on smaller and smaller hardware. At the same time, almost every large company you can think of has made integrating generative AI a top strategic priority, with blank-cheque budgets. Whole industries (also deeply funded) are popping up around fixing the context-window memory deficiencies, prompt stuffing for better steerability, and better summarization and embedding of your personal or corporate data.
We’re going to see LLM tech everywhere in everything, even if it makes no sense and becomes annoying. After a few years, maybe it’ll seem normal to have a conversation with your shoes?