• 2 Posts
  • 364 Comments
Joined 9 months ago
Cake day: June 4, 2025

  • I like interfaces as a supplement to inheritance. The strength of inheritance is getting all of the internal functionality of the parent class, while still allowing you to differentiate between children.

    Interfaces are useful for disparate classes which don’t have much in common besides fitting within a specific use case, rather than classes that are very similar to each other but need specific distinguishing features.
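    A minimal sketch of the distinction above (all class and method names are invented for illustration): inheritance shares the parent's internal machinery among similar classes, while an interface groups otherwise-unrelated classes that merely fit one use case.

```python
from typing import Protocol

# Inheritance: similar classes share the parent's internal behavior,
# overriding only what distinguishes them.
class Animal:
    def __init__(self, name: str) -> None:
        self.name = name

    def describe(self) -> str:          # inherited internal functionality
        return f"{self.name} says {self.sound()}"

    def sound(self) -> str:             # children differentiate here
        raise NotImplementedError

class Dog(Animal):
    def __init__(self) -> None:
        super().__init__("Dog")

    def sound(self) -> str:
        return "woof"

# Interface: disparate classes that share nothing except fitting one
# use case -- here, "can be rendered to a log line". No common ancestry.
class Loggable(Protocol):
    def log_line(self) -> str: ...

class Invoice:
    def log_line(self) -> str:
        return "invoice #42"

class TemperatureSensor:
    def log_line(self) -> str:
        return "21.5 C"

def write_log(items: list[Loggable]) -> list[str]:
    return [item.log_line() for item in items]
```

    `Invoice` and `TemperatureSensor` never declare the protocol; they qualify purely by fitting the use case, which is the point being made.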

  • SparroHawc@lemmy.zip to Fuck AI@lemmy.world · Apophenia
    edited 9 days ago

    You misunderstand. The outcome of asking an LLM to think about an apple is the token ‘Okay’. It probably doesn’t get very far into even what you claim is ‘thought’ about apples, because when someone says the phrase “Think about X”, the immediate response is almost always ‘Okay’ and rarely anything about whatever ‘X’ is. That is the sum total of its objective.

    It does not perform a facsimile of human thought; it performs an analysis of what the most likely next token would be, given the text that came before it. It imitates human output without any of the behavior or thought processes that lead up to that output in humans. There is no model of how the world works. There is no theory of mind. There is only how words relate to each other, with no ‘understanding’.

    It’s very good at outputting reasonable text, and even at drawing inferences based on word relations, but anthropomorphizing LLMs is a path that leads to exactly the sort of conclusion the original comic is mocking.

    Asking an LLM if it is alive does not cause the LLM to ponder the possibility of whether or not it is alive. It causes the LLM to output the response most similar to its training data, and nothing more. It is incapable of pondering its own existence, because that isn’t how it works.

    Yes, our brains really are immensely complex neural networks, but beyond that the structures are so ridiculously different that it’s closer to comparing apples to the concept of justice than comparing apples to oranges.


  • You can tell a person to think about apples, and the person will think about apples.

    You can tell an LLM ‘think about apples’ and the LLM will say ‘Okay’ but it won’t think about apples; it is only saying ‘okay’ because its training data suggests that is the most common response to someone asking someone else to think about apples. LLMs do not have an internal experience. They are statistical models.
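    The “statistical model” point above can be sketched with a toy bigram model (the corpus and tokens are invented for illustration; real LLMs are vastly larger, but the objective is the same shape): the “most likely next token” is nothing more than a lookup over counts of what followed what in the training text.

```python
from collections import Counter, defaultdict

# Hypothetical training text in which "okay" is the common reply
# to "think about X".
corpus = ("think about apples okay . think about work okay . "
          "think about rain okay sure").split()

# Count, for each token, which tokens followed it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(prev: str) -> str:
    """Emit the statistically most common successor -- no 'thought' involved."""
    return follows[prev].most_common(1)[0][0]

print(next_token("apples"))  # -> okay
```

    The model outputs ‘okay’ after ‘apples’ not because it considered apples, but because that is what most often came next in its data.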