

There are thousands of starving furry artists out there who would be happy to take your commission.


The conversion of the output to tokens inherently loses a lot of the information extracted by the model and any intermediate state it has synthesized (what it “thinks” of the input).
Until the model is able to retain its own internal state and integrate new information into that state as it receives it, all it will ever be able to do is try to fill in the blanks.


The size of the context window is fixed in the structure of the model. LLMs are still at their core artificial neural networks, so an analogy to biology might be helpful.
Think of the input layer of the model like the retinas in your eyes. Each token in the context window, after embedding (i.e. conversion to a series of numbers, because ofc it’s all just math under the hood), is fed to a certain set of input neurons. It’s just like how the rods and cones in your retina capture light and convert it to electrical signals, which are passed to neurons in your optic nerve, which connect to neurons in your visual cortex, with each layer along the way processing and analyzing the signal.
The number of tokens in the context window is directly proportional to the number of neurons in the input layer of the model. To make the context window bigger, you have to add more neurons to the input layer, but that quickly results in diminishing returns without adding more neurons to the inner layers to be able to process the extra information. Ultimately, you have to make the whole model larger, which means more parameters, which means more data to store and more processing power per prompt.
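Here’s a rough toy sketch of what I mean (all the numbers are made up, and a real model is vastly bigger, but the structure is the same): the positional table’s size is fixed when the model is built, so you physically can’t feed it a longer window without restructuring the model.

```python
# Toy sketch: why the context window is baked into the model's structure.
# Each token ID is looked up in an embedding table, and each *position*
# gets its own entry in a positional table whose size is fixed at build
# time -- that table is the "retina". All sizes here are made up.
import random

VOCAB_SIZE = 16   # toy vocabulary
EMBED_DIM = 4     # numbers per token after embedding
MAX_CONTEXT = 8   # fixed when the model is built

random.seed(0)
token_table = [[random.random() for _ in range(EMBED_DIM)]
               for _ in range(VOCAB_SIZE)]
position_table = [[random.random() for _ in range(EMBED_DIM)]
                  for _ in range(MAX_CONTEXT)]

def embed(token_ids):
    if len(token_ids) > MAX_CONTEXT:
        raise ValueError("prompt exceeds the context window")
    # token embedding + position embedding, element-wise
    return [[t + p for t, p in zip(token_table[tok], position_table[i])]
            for i, tok in enumerate(token_ids)]
```

To grow `MAX_CONTEXT` you have to allocate more positional entries and wire them into everything downstream, which is exactly the “make the whole model larger” problem.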


I’d bet money that the Disney deal falling through was because OpenAI couldn’t guarantee that Sora couldn’t be used to generate porn of their characters, since attackers will almost certainly always find new prompt injections.
Surprise surprise, it’s a giant fucking black box that you can never have complete control over.


If you prompted an LLM to review all of its database entries, generate a new response based on that data, then save that output to the database and repeat at regular intervals, I could see calling that a kind of thinking.
That’s kind of what the current agentic AI products like Claude Code do. The problem is context rot. When the context window fills up, the model loses the ability to distinguish between what information is important and what’s not, and it inevitably starts to hallucinate.
The current fixes are to prune irrelevant information from the context window, use sub-agents with their own context windows, or just occasionally start over from scratch. They’ve also developed conventions like AGENTS.md and CLAUDE.md files, where you can store long-term context and basically “advice” for the model, which is automatically read into the context window.
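A crude sketch of the pruning mitigation (function name, word-counting, and budget are all made up for illustration; real tools count tokens, not words): keep the long-term notes no matter what, and drop the oldest turns when the budget runs out.

```python
# Toy sketch of "prune the context": always retain the long-term notes
# (the CLAUDE.md-style file), keep the newest turns that fit in a fixed
# budget, and silently drop everything older.

def build_context(long_term_notes, turns, budget=50):
    """Return [notes, oldest-kept-turn, ..., newest-turn] within budget."""
    used = len(long_term_notes.split())   # words as a stand-in for tokens
    kept = []
    for turn in reversed(turns):          # newest turns matter most
        cost = len(turn.split())
        if used + cost > budget:
            break                         # prune everything older
        kept.append(turn)
        used += cost
    return [long_term_notes] + list(reversed(kept))
```

The dropped turns are simply gone, which is why “start over from scratch” and pruning are mitigations for context rot, not cures.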
However, I think an AGI inherently would need to be able to store that state internally, to have memory circuits, and “consciousness” circuits that are connected in a loop so it can work on its own internally encoded context. And ideally it would be able to modify its own weights and connections to “learn” in real time.
The problem is that this wouldn’t scale to current usage, because you’d need to store all that internal state, potentially including a unique copy of the model, for every user. And the companies wouldn’t want that, because they’d be giving up control over the model’s outputs: they’d have no feasible way to supervise the learning process.


I only have a rather high level understanding of current AI models, but I don’t see any way for the current generation of LLMs to actually be intelligent or conscious.
They’re entirely stateless, once-through models: any activity in the model that could be remotely considered “thought” is completely lost the moment the model outputs a token. Then it starts over fresh for the next token with nothing but the previous inputs and outputs (the context window) to work with.
That’s why it’s so stupid to ask an LLM “what were you thinking”, because even it doesn’t know! All it’s going to do is look at what it spat out last and hallucinate a reasonable-sounding answer.
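Here’s a toy sketch of what I mean by once-through (obviously not a real model, just a stand-in function): the only thing that survives from one step to the next is the token list itself. Whatever happens inside the “model” is recomputed from scratch and thrown away every single step.

```python
# Toy sketch of stateless autoregressive generation. The "model" is a
# pure function of the token sequence; any internal work is discarded
# after each step, and the next step re-reads the whole context.

def toy_model(tokens):
    # stand-in for a real forward pass: output depends only on the input
    return (sum(tokens) * 31 + len(tokens)) % 100

def generate(prompt_tokens, n_new):
    tokens = list(prompt_tokens)
    for _ in range(n_new):
        next_token = toy_model(tokens)  # full re-read of the context
        tokens.append(next_token)       # the ONLY state carried forward
    return tokens

print(generate([1, 2, 3], 3))  # → [1, 2, 3, 89, 49, 69]
```

Ask this thing “what were you thinking on step two?” and there is literally nothing to consult except the tokens it already emitted.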


No, but that won’t stop me from having fun trying, will it?


“We have always been at war with Eastasia in the Middle East.”


“The nonce reuse issue seems to be a valid security issue, but it is by no means a critical vulnerability: it only affects applications that do more than four billion encryptions with a single HPKE setup,” said Valsorda. “The average application does one.”
No implementation should be using the same asymmetric keypair for a key exchange* more than once. This is such a non-issue that it’s kind of hilarious. Sounds like the reporter was trying so desperately to get credit for anything they could put on their portfolio, and just wouldn’t take “no” for an answer.
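For the curious, the failure mode being described is easy to show with a toy counter-based nonce scheme (purely illustrative, not any real library’s code): nonces stay unique right up until a 32-bit counter wraps at ~4.3 billion messages.

```python
# Toy illustration of "nonce reuse after ~4 billion encryptions":
# a counter-based nonce derived by XORing a base nonce with a 32-bit
# sequence number. Unique until the counter silently wraps at 2^32.

MASK32 = 0xFFFFFFFF

def nonce_for(base_nonce: int, seq: int) -> int:
    return base_nonce ^ (seq & MASK32)  # the bug: counter wraps silently

assert nonce_for(0xABCD, 5) != nonce_for(0xABCD, 6)        # unique...
assert nonce_for(0xABCD, 0) == nonce_for(0xABCD, 1 << 32)  # ...until wrap
```

So to even reach the bug, an application has to run four billion encryptions under one setup, which is exactly why Valsorda shrugged at it.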


In my experience, around 50% of (professional) developers do not take pride in their work, nor do they care.
I agree. And in my experience, that 50% have been the quickest and most eager to add LLMs to their workflow.


I realized the fundamental limitation of the current generation of AI: it’s not afraid of fucking up. The fear of losing your job is a powerful source of motivation to actually get things right the first time.
And this isn’t meant to glorify toxic working environments or anything like that; even in the most open and collaborative team that never tries to place blame on anyone, in general, no one likes fucking up.
So you double check your work, you try to be reasonably confident in your answers, and you make sure your code actually does what it’s supposed to do. You take responsibility for your work, maybe even take pride in it.
Even now we’re still having to lean on that, but we’re putting all the responsibility and blame on the shoulders of the gatekeeper, not the creator. We’re shooting a gun at a bulletproof vest and going “look, it’s completely safe!”


6-pack of Natty Ice


I think the best way to quickly make this check completely useless isn’t to rip it out entirely, but to make the kernel generate a random age between 21 and 99 every time an application requests it.


I say this in the most loving and accepting way possible, but I 100% think autism was somehow involved in the creation and spread of the butt trumpets.
I’m convinced of this because I know with absolute certainty in my heart that if any of my friends on the spectrum were alive in the 13th century, they’d be sent to the seminary for being weird and would spend their days doodling butt trumpets in the margins of manuscripts.


Buc-ees is the only good thing about Texas.


Yeah, that was my question. Why the hell would they develop new silicon when 99% of their fab space is dedicated to feeding the AI bubble?


If it can use milk that would otherwise be thrown out, due to contamination or expiration, that wouldn’t be so bad.


There’s no way in hell Paramount has that much cash available. This is gonna be a leveraged buyout of epic proportions, and it’s gonna crash and burn epically too.


It would need to be able to form memories like real brains do, by creating new connections between neurons and adjusting their weights in real time in response to stimuli, and having those connections persist. I think that’s a prerequisite to models that are capable of higher-level reasoning and understanding. But then you would need to store those changes to the model for each user, which would be tens or hundreds of gigabytes.
These current once-through LLMs don’t have time to properly digest what they’re looking at, because they essentially forget everything once they output a token. I don’t think you can make up for that by spitting some tokens out to a file and reading them back in, because it still has to be human-readable and coherent. That transformation is inherently lossy.
This is basically what I’m talking about:
But for every single token the LLM outputs. The fact that it’s allowed to take notes is a mitigation for this context loss, not a silver bullet.
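The lossiness is trivial to demonstrate with made-up numbers: serialize some internal floats into a short human-readable note, read the note back, and the original values are gone.

```python
# Toy demo: round-tripping internal state through short, human-readable
# text loses information. Numbers are made up for illustration.

state = [0.123456789, 2.718281828]               # "internal" values
note = " ".join(f"{x:.2f}" for x in state)       # write a note to self
restored = [float(tok) for tok in note.split()]  # read it back next pass

assert restored != state   # the round trip is inherently lossy
```

And that’s with numbers, the friendliest possible case; summarizing half a conversation into a paragraph throws away far more.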