

He rips it eh? Not slams?
Vertical mice do take a day or two to get used to for gaming; I kept pushing the thing sideways very slightly when clicking.
Many credit card software providers also charge for the investigation of chargebacks, to the tune of hundreds of dollars, even if the chargeback is reversed.
Magic quotes were the single biggest mistake I’ve ever seen any language standard make.
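For anyone who missed that era: a minimal sketch (in Python, purely for illustration, the helper name mirrors PHP's `addslashes()`) of what `magic_quotes_gpc` did and why it caused the classic double-escaping bug:

```python
# Sketch of PHP's magic_quotes_gpc: every incoming GET/POST value was
# silently run through addslashes() before application code ever saw it.

def addslashes(s: str) -> str:
    """Escape quotes and backslashes, like PHP's addslashes()."""
    out = []
    for ch in s:
        if ch in ("'", '"', "\\"):
            out.append("\\")
        out.append(ch)
    return "".join(out)

# What the user actually typed:
raw = "O'Brien"

# What magic quotes handed to the application behind your back:
magic = addslashes(raw)

# Code written defensively then escaped AGAIN before building the query,
# storing mangled data -- the classic double-escaping bug:
double = addslashes(magic)

print(magic)   # O\'Brien  -- only "safe" if fed straight into SQL
print(double)  # O\\\'Brien -- garbage ends up in the database
```

The deeper problem was that escaping happened at input time, globally, instead of at the point of use, so code could never know whether a string had been escaped zero, one, or two times.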
https://foundation.pbs.org/ways-to-give/
Sucks that it will be up to us to keep Big Bird and Space Time going.
Fuck flipboard and their stupid ass non disable-able default action on old samsung devices. I’ll hold that grudge till I die
You hit one button on your phone and it opened flipboard, an app I’ve never once wanted to open. Insanity.
The default branch for some projects is “production” since CD deploys on pushing to that branch
For new projects, main. My thought is that even if master is not offensive, the industry has generally made the change, so the only reason to stick with master is stubbornness or hating political correctness, neither of which aligns with my self-view, so I'll use main and move on.
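For what it's worth, wiring up either convention is a one-liner in modern git (the repo/branch names below are just placeholders):

```shell
# Make all new repositories default to "main" (git >= 2.28)
git config --global init.defaultBranch main

# Or pick the branch per repository at init time, e.g. for a
# CD setup that deploys on pushes to "production":
git init --initial-branch=production my-deploy-repo

# Rename the default branch in an existing repository:
git branch -m master main
git push -u origin main
```

After the rename you still need to update the default branch in your forge's settings (GitHub/GitLab) and any CI triggers that name the branch explicitly.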
In general if people are genuinely hurt by the use of some words, I’m not sadistic so I’ll avoid using them. From my perspective morality is the pursuit of the reduction of suffering, even if that suffering is internal.
Fiverr and Upwork are the standard starting places, but their policies for keeping contractors on the site are rough, the cut they take is rough, and the competition is rough.
I’ve had success identifying specific software vendors with functional deficits and targeting customers of that software
I’m sure that’ll stop em
Juma a freak
The Skibidi Toilet thing is pretty wild
Don’t get me wrong, journalism should be a paid profession in a capitalist society, just pointing out the irony (90s Alanis Morissette style).
While I do indeed hate paywalls and find modern journalism fundamentally broken, I recognize their utility.
I love the freedom of nihilism: our brain chemicals are the only thing making us happy or sad; they create morals, love, hate.
You know what, I guess in a way hormones are more human than the brain itself?
Apathy prevents violence, and will continue to do so until the vast majority of people are in constant pain and fear with absolutely no alternative. When the people have nothing to lose they will act.
Even the most politically aware and ethically minded among us can’t drop their daily lives in favor of standing up for the oppressed when standing on the razor edge of working every day to avoid homelessness.
Those who are not on the razor edge have “more to lose” by toppling the system
The irony of paywalling a piece positively portraying an anti-paywall advocate is rich.
I am not an expert in your field, so you’ll know better about the domain specific ramifications of using llms for the tasks you’re asking about.
That said, one of the pieces of my post that I do think is relevant and important for both your domain and others is the idempotency and privacy of local models.
Idempotent here implies that the model is not liquid (its weights don't change from one input to the next), and that the entropy is wranglable (a fixed seed, or zero temperature, gives you the same output for the same input).
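A toy sketch of what "wranglable entropy" means in practice. The function and logits below are made up for illustration; real local runners expose the same knobs as seed and temperature settings:

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float, seed: int) -> str:
    """Toy next-token sampler. With temperature 0 it is greedy (fully
    deterministic); otherwise a fixed seed makes runs reproducible."""
    if temperature == 0:
        # Greedy decoding: always the highest-scoring token.
        # Frozen weights + greedy decoding = same input, same output.
        return max(logits, key=logits.get)
    rng = random.Random(seed)  # frozen weights + frozen seed = frozen entropy
    tokens = list(logits)
    weights = [math.exp(score / temperature) for score in logits.values()]
    return rng.choices(tokens, weights=weights)[0]

# Pretend model output for the next-token distribution:
logits = {"cat": 2.0, "dog": 1.5, "fish": 0.1}

# Deterministic: greedy decoding ignores the seed entirely.
assert sample_next_token(logits, 0, seed=1) == sample_next_token(logits, 0, seed=99)

# Reproducible: same seed and temperature give the same sample every run.
assert sample_next_token(logits, 0.8, seed=42) == sample_next_token(logits, 0.8, seed=42)
```

Hosted APIs generally can't promise this, since the provider can swap or retrain the model behind the endpoint at any time; a local model file on disk can't change out from under you.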
Local models are by their very nature not sending your data somewhere; rather, they run your input through your GPU, like many other programs on your computer. That needs to be qualified: any non-airgapped computer's information is likely to be leaked at some point in its lifetime, so adding classified information to any system is foolish and short-sighted.
If you use ChatGPT for collating private, especially classified, information: OpenAI has explicitly stated that they use ChatGPT prompts for further training, so yes, that information will absolutely leak into future models, and it must be expected to leak in a way that would be traceable to you personally.
To summarize, using local LLMs is somewhat better for tasks like the ones you're asking about: the information won't be shared with any AI company, though that doesn't guarantee safety from traditional snooping. Using remote commercial LLMs, though? Your fears are absolutely justified. Anyone inputting classified information into commercial systems like ChatGPT will both leak that information and taint future models with it. The taint isn't even limited to the one company/model; distillation means derivative models will also carry that privileged information.
TL;DR: yes, but less so for local AI models.
Obviously this is the fuckai community so you’ll get lots of agreement here.
I’m coming from all communities and don’t have the same hate for AI. I’m a professional software dev, have been for decades.
I'm of two minds here. On the one hand, you absolutely need to know the fundamentals: you must know how the technology works and what to do when things go wrong, or you're useless on the job. On the other hand, I don't demand that the people who work for me write x86 assembly and avoid Stack Overflow; they should use whatever language/mechanism produces the best code in the allotted time. I feel similarly about AI, especially local models that can be used in an idempotent-ish way. It gets a little spooky to rely on companies like Anthropic or OpenAI, because they could just straight up turn off the faucet one day.
Those who use AI to sidestep their own education are doing themselves a disservice, but we can't put our heads in the sand and pretend the technology doesn't exist; it will be used professionally going forward regardless of anyone's feelings.
With blackjack and hookers, no doubt
Evolution, carbon dating, some physics topics