

AI writing tools — improve, summarize, translate, and more (Anthropic / OpenAI)
why though


Reminder that Google is on its third attempt to prevent or limit the installation of APKs from outside the Play Store, meaning they are near the point where they can effectively control what apps people run.
Maybe you did take great moon pictures. Maybe the picture was a white blob and Samsung just added those craters for you.


The output of one pioneering model has become almost indistinguishable from another's, forcing companies to try to differentiate their products.
This is corpo speak for "LLMs aren't going to get much better than this, which is what research showed months ago, but we can't let users notice, so we'll create more wrappers, shove it into more apps, mask the mistakes, and keep changing the tone and adding floating visualizations to distract you from the fact that they're still shit."
Reminds me of when Samsung got caught pasting a moon PNG texture on top of blurry photos, as long as the blob was round enough, essentially faking their "amazing zoom and night mode moon photography."


Books aren’t an answer to this question if you need quick information you couldn’t have predicted you’d need, and therefore do not own a book about.


Now somebody needs to post about this on Reddit, so The Verge can make an AI generated piece based on the post!


Wikipedia is fighting an ongoing and ever-increasing problem of contributors pushing AI-generated text, which contains false information, into articles.
EDIT: Why the downvotes? Your misinformation is not my fault:
1 in 20 new Wikipedia pages contain AI-generated text: https://www.newscientist.com/article/2454256-one-in-20-new-wikipedia-pages-seem-to-be-written-with-the-help-of-ai/
Wikipedia maintains a task force to clean up the articles poisoned by AI: https://en.wikipedia.org/wiki/Category:Articles_containing_suspected_AI-generated_texts
AI translations are poisoning non-EN Wikipedia: https://wistkey.medium.com/ai-translations-are-poisoning-wikipedia-and-putting-minority-languages-at-risk-c4539984734c
AI "contributions" are regurgitating recycled information and fake sources, and human verification often fails: https://media.ccc.de/v/39c3-ai-generated-content-in-wikipedia-a-tale-of-caution
But sure, keep downvoting me and ignoring the issue, geniuses.
You know, I used to think AI was pretty cool because I read a lot of sci-fi.
You read a lot of sci-fi and your conclusion was that AI is depicted as good???


What the fuck


I’m actually exhausted at the hypersexualization of media.
I’m not a prude, I’m glad to live in a time where a sex scene can show up uncensored in a TV show. Adult content made for adults, sure, go nuts.
But must everything now have as much sex as possible to sell? Why must every game, every social media feed, every manga, every book, be filled with excessive porn? The problem is not gooner bait existing; the problem is everything becoming gooner bait.


And this is one of the very rare situations in which I will opt in for data collection.
Having robust data on FPS per hardware component per game, over the entirety of Steam, is beyond useful.


I’m a volunteer maintainer for matplotlib
DO NOT TOUCH my precious matplotlib, I’ll HUNT DOWN any AI slop shithead damaging MY BABY


wait you have to verify your grades…from high school!?
Yep. More than once.


There’s an opening for a position you’re interested in, so you fill out one of those long “everything is already in your CV, but you need to retype it into our form anyway” registrations.
A week later, they contact you via email. A very enthusiastic person, signing with their own name and with no automated HR shenanigans, claims to have liked what they saw, and you’re through to the next step. There’s some corporate “we are an amazing workplace for exceptional people” fluff, but nothing terrible, so let’s proceed.
You now have a week to write and send a document that has nothing to do with the position or your technical skills. You need to write a biography; you need to describe what you were like in high school (were you social?); you need to show proof of your high school grades, then college; you need to produce a biographical memoir of your life. But hey, it’s Canonical, right? Great for your career, so you proceed.
Then a technical interview comes up. You have a time limit to complete it, but the questions aren’t actually deep enough to test your skills - they would only veto somebody with zero idea of what’s going on - so it becomes tedious. A child with an AI chatbot could probably score well enough.
Then you move on to an IQ test, with baffling things such as reaction time tests (if I ever need fast reaction times in my field of work, ring the bell, because a zombie apocalypse just breached our office building).
You’re tired of the BS, but they email you three times in a row reminding you about the completion deadlines. Now somebody wants to speak with you, and guess what: they haven’t checked your CV, or your biography, or your test results, so get used to explaining everything again.
You’ll have quite a few meetings like this, always moving up to “higher-ups” who are equally unaware of who you are, until you reach a VP. And then they put you on hold… so hope things work out, because they can leave you on hold forever, reply that the position is no longer available, or finally hire you. They have KPIs that incentivize having “candidates under evaluation”, which means keeping you in limbo at the end of the process is a great result on their dashboards. Oh, and don’t trust the “oh, we loved you, you’re in, let’s sign next week”, because the probability of not signing is still high.


After experiencing Canonical’s recruitment process, which they claim to be extremely proud of, I can only imagine that if the other departments operate with a similar mindset, the entire company is a nonsensical hell.


I can’t help but feel like all these monthly headlines of “insiders claim AI became sentient” or “they are TOO AFRAID of the NEWEST HIDDEN MODEL they’re testing” are just a marketing campaign.
The idea is to create the illusion that they’re constantly on the brink of delivering the incredible new model that finally thinks, reasons, and executes like a sentient being. And of course, they’re nowhere near that (and apparently LLMs will never reach that point anyway), so they need to keep generating these headlines so CEOs read just the title during lunch and think “man, I’m glad my company is paying for Claude; we don’t want to be left out when this goes public”.
Just use any LLM. Use Claude. It’s ridiculous: within five minutes you can detect the patterns each of them uses to write, identify their flawed “logic” and their limitations, and see that their output is liquid ass; they’re just bad. Look at the hyped GPT-5 release, which is as dumb and annoying as GPT-4, only with a few safeguards built in and a shift in tone. That’s it; that’s the “new model”. You can show me benchmarks of the new model being 45% better than the previous one, but then you put it through any test that isn’t a well-known, publicly available benchmark and it fails catastrophically, because it’s dumb and the training set primed it for that specific test.
So I don’t give a shit how many senior employees at OpenAI, Anthropic, Google, or Twitter end up resigning - this is a marketing stunt. They collect their generous compensation for leaving, they retire and live happily, and the company gets a fresh new headline about how their new model is so amazing it scares humans, which is all they need to get a new round of funding. They’re not profitable, so it doesn’t matter whether the product ever exists; they just want to leverage investor FOMO forever.


I’m really loving the game… do I want to risk it and look at the list… Sigh, I’ll look at the list.
Any special features you’re looking for? Fossify Messages works fine for me, and it supports both SMS and MMS. In fact, the entire Fossify suite of apps works really well in general.