

Marginalia should be one of the most important things to preserve, in a similar importance to Wikipedia.
Yeah, the best is never going to be “now”, which is always drowned in uncertainty and chaos. When you look back, everything looks safe and deterministic.
The problem is how the whole thing was presented to people. You just need to browse subreddits related to ChatGPT to see the amount of misunderstanding about how it works. A few examples:
https://www.reddit.com/r/ChatGPT/comments/1ld6dot/a_close_friend_confessed_she_almost_fell_into/
https://www.reddit.com/r/ChatGPT/comments/1koadmg/testing_gpts_response_to_delusional_prompts_it/
https://www.reddit.com/r/ChatGPT/comments/1low386/this_is_what_recursion_looks_like/
This whole thing is kinda scary: how easily some people can spiral into delusion when over-relying on LLMs.
These models fill gaps with plausible-sounding but often fabricated information.
It’s understandable how non-technical users treat their outputs as profound revelations, mistaking AI-generated fiction for hidden truths.
I’m just thinking now that the Mac is next.
I thought that with all these companies preaching about LLMs doing their coding, the cost of development would go down, no? So why do they need to reduce everything to a single codebase to make it easier for developers?
All I see is people chatting with an LLM as if it were a person. “How bad is this on a scale of 1 to 100”? You’re just doomed to get some random answer based solely on whatever context is being fed into the input, the full extent of which you probably don’t even know.
Trying to make the LLM “see its mistakes” is a pointless exercise. Getting it to “promise” something is useless.
The issue with LLMs working in human language is that people eventually want to apply human traits to them, such as asking “why” as if the LLM knew its own decision process. It only takes an input and generates an output; it can’t offer any “meta” explanation of why it output X and not Y in the previous prompt.
I just wish I’m long gone before humanity descends into complete chaos.
Or the most common cases can be automated while the more nuanced surgeries are left to the actual doctors.
They might, once it becomes too flooded with AI slop.
This is quite funny actually.
I like the saying that LLMs are “good” at stuff you don’t know. That’s about it.
When you know the subject, it stops being very useful because you already know the obvious stuff the LLM could help you with.
And I don’t care if something is written by AI. As people, we care about the quality of the output.
We know AI by default just creates slop, but with a human in the loop it’s possible to get inspiration for scenes, brainstorm, discuss ideas, etc.
I think a good writer would use it this way.
That’s Game Theory right there.
I wish. My mom has been like a zombie on Facebook for maybe four years now.
Algorithms optimized for engagement with no ethics were the point where the world started going downhill.
Yeah, but the kernel is a low-level component that handles hardware, memory, and processes. It’s not what users interact with directly, so sharing the same kernel doesn’t make them as similar as you’d think.
What makes Linux feel like ‘Linux’ to users is the stuff on top: the userland—bash, coreutils, package managers, X11/Wayland, etc. Android replaces almost all of that, so even though it uses the Linux kernel, it doesn’t feel like Linux.
But… but… those good ol’ days felt so good! We need to relive those days!
I’ll wait until they kill it two months from now.
I gotta say, I actually enjoyed my time programming for BlackBerry. It was the only time I did C++/Qt professionally. And the APIs were clearly inspired by the iOS/macOS ones, so it was kinda easy for me to migrate to iOS later.
Then again, the guys in the university lab back then got a few BB10 devices just for submitting apps to their app store.
Yeah, these claims seem very vague. I’d like to see how all that works, with examples.