I’ve always wondered if this sort of tech could scale to 5G towers for mass surveillance purposes.
Chulk@lemmy.ml to Technology@lemmy.world • Men are opening up about mental health to AI instead of humans · English · 242 · 24 days ago
Also worth noting that:
1. AI is arguably a surveillance technology that’s built on decades of our patterns.
2. Large AI companies like OpenAI are signing contracts with the Department of Defense.
If I were a US citizen, I would be avoiding discussing my personal life with AI like the plague.
Chulk@lemmy.ml to Palestine@lemmy.ml • [Video] Israeli colonist blocking ambulance from passing · English · 10 · 27 days ago
That ambulance driver is a better person than I am.
Removed by mod
Political intervention is what started Google, so I don’t see the problem.
How about taking responsibility and just not using services that require it?
Google has shaped the web into what it is over decades so that they could maintain their position of power. This is the very essence and purpose of a monopoly. Yet here you are trying to blame anything but the monopoly for the monopoly’s existence.
Nothing like convincing hundreds of millions of people to abandon a company rather than put any pressure on the small group of greedy people who own it.
Chulk@lemmy.ml to Technology@lemmy.world • ChatGPT Mostly Source Wikipedia; Google AI Overviews Mostly Source Reddit · English · 0 · 1 month ago
You shouldn’t cite Wikipedia because it is not a source of information; it is a summary of other sources, which are referenced.
Right, and if an LLM is citing Wikipedia 47.9% of the time, that means that it’s summarizing Wikipedia’s summary.
You shouldn’t cite Wikipedia for the same reason you shouldn’t cite a library’s book report: you should read and cite the book itself.
Exactly my point.
Chulk@lemmy.ml to Technology@lemmy.world • ChatGPT Mostly Source Wikipedia; Google AI Overviews Mostly Source Reddit · English · 2 · 1 month ago
Throughout most of my years of higher education, as well as K-12, I was told that citing Wikipedia was forbidden. In fact, many professors and teachers would automatically fail an assignment if they felt you had used Wikipedia. The claim was that the information was often inaccurate, or changed too frequently to be reliable. This reasoning, while irritating at times, always made sense to me.
Fast forward to my professional life today. I’ve been told on a number of occasions that I should trust LLMs to give me accurate answers. I’m told that I will “be left behind” if I don’t use ChatGPT to accomplish things faster. I’m told that my concerns about accuracy and ethics surrounding generative AI are simply “negativity.”
These tools are (abstractly) referencing random users on the internet, as well as Wikipedia, and treating both as legitimate sources of information. That seems crazy to me. How can we trust a technology that just references flawed sources from our past? I know there are ways to improve accuracy, like RAG, but most people are hitting the LLM directly.
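For what it’s worth, the retrieval half of RAG is easy to sketch. Here’s a toy example (hypothetical passage IDs and naive keyword-overlap scoring are mine for illustration; real systems use vector embeddings and an actual document index): instead of asking the model to answer from whatever it absorbed in training, you first fetch relevant passages and paste them into the prompt, so the answer can at least be checked against cited material.

```python
# Toy sketch of the retrieval step in RAG (retrieval-augmented generation).
# Hypothetical mini-corpus; a real system would index actual documents
# and score them with embedding similarity, not keyword overlap.
CORPUS = {
    "wiki:turing": "Alan Turing proposed the Turing test in 1950.",
    "wiki:lovelace": "Ada Lovelace wrote the first published algorithm.",
    "forum:ai_thread": "someone on the internet said AI is magic",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank passages by naive keyword overlap with the query."""
    words = set(query.lower().split())
    scored = sorted(
        CORPUS.items(),
        key=lambda item: len(words & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Ground the model's answer in retrieved, citable sources."""
    context = "\n".join(f"[{sid}] {text}" for sid, text in retrieve(query))
    return (
        "Answer using ONLY these sources, and cite them:\n"
        f"{context}\n\nQ: {query}"
    )

print(build_prompt("who proposed the turing test"))
```

The point of the sketch: the model’s output is constrained to passages you can inspect, which is the accuracy improvement being referred to, rather than trusting whatever the model internalized from scraping Reddit.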
The culture around generative AI should be scientific and cautious, but instead it feels like a cult with a good marketing team.
Yep, everything is politics whether we like it or not.