hersh

joined 1 year ago
[–] hersh@literature.cafe 3 points 1 month ago

Also worth mentioning: you might still need to add the "most recent visit" column under the View menu. And if you dare to actually load any of those pages, they'll jump all the way to the top instead of staying in their original location. It's really annoying.

[–] hersh@literature.cafe 4 points 1 month ago

Never liked him, but I acknowledge that he had some effective economic policies during his time as mayor. He was at least competent and sane. He went completely off the rails a long time ago, though.

He's often credited with cleaning up Times Square, which was known for prostitution back in the 80s. But honestly, he reaped what his predecessors sowed to a large degree.

He used 9/11 like his personal sword and shield. He was lucky to be in a prominent position related to the biggest and least controversial issue in America. I don't imagine he ever would have been on the national stage otherwise; he was pretty much at the natural end of his career before then.

NYC has a history of conservative mayors, which seems a bit odd since we're so solidly liberal in federal elections. It sure doesn't help when we get a Democrat as infantile and corrupt as our current mayor, Eric Adams. See: https://en.wikipedia.org/wiki/Federal_prosecution_of_Eric_Adams

 
[–] hersh@literature.cafe 26 points 2 months ago

Whisper is open source. GPT-2 was, too.

[–] hersh@literature.cafe 9 points 2 months ago

Absolutely this. Phones are the primary device for Gen Z. Phone use doesn't develop tech skills because there's barely anything you can do with the phones. This is particularly true with iOS, but still applies to Android.

Even as an IT administrator, there's hardly anything I can do when troubleshooting phone problems. Oh, push notifications aren't going through? Well, there are no useful logs or anything for me to look at, so...cool. It makes me crazy how little visibility I have into anything on iPhones or iPads. And nobody manages "Android" in general; at best they manage like two specific models of one specific brand (usually Samsung or Google). It's impossible to manage arbitrary Android phones because there's so little standardization and so little control over the software in the general case.

[–] hersh@literature.cafe 16 points 2 months ago

I posted some of my experience with Kagi's LLM features a few months ago here: https://literature.cafe/comment/6674957. TL;DR: the summarizer and document discussion are fantastic, because they don't hallucinate. The search integration is as good as anyone else's, but still nothing to write home about.

The Kagi assistant isn't new, by the way; I've been using it for almost a year now. It's now out of beta and has an improved UI, but the core functionality seems mostly the same.

As far as actual search goes, I don't find it especially useful. It's better than Bing Chat or whatever they call it now because it hallucinates less, but the core concept still needs work. It basically takes a few search results and feeds them into the LLM for a summary. That's not useless, but it's certainly not a game-changer. I typically want to check its references anyway, so it doesn't really save me time in practice.
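The pattern I'm describing looks roughly like this (a hand-written sketch of generic retrieval-augmented summarization, not Kagi's actual code; the function name and result fields are my own invention):

```python
def build_summary_prompt(query, results):
    """Assemble an LLM prompt from a handful of search results.

    `results` is a list of dicts with hypothetical keys "title",
    "url", and "snippet". The model is asked to cite sources as [n]
    so the answer can be checked against them.
    """
    sources = "\n".join(
        f"[{i}] {r['title']} ({r['url']}): {r['snippet']}"
        for i, r in enumerate(results, start=1)
    )
    return (
        "Answer the question using only the sources below, "
        "citing each claim as [n].\n\n"
        f"Question: {query}\n\nSources:\n{sources}"
    )
```

The citation requirement is the important part: it's what lets you do the reference-checking I mentioned, which is also where the time savings evaporate.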

Kagi's search is primarily not LLM-based and I still find the results and features to be worth the price, after being increasingly frustrated with Google's decay in recent years. I subscribed to the "Ultimate" Kagi plan specifically because I wanted access to all the premium language models, since subscribing to either ChatGPT or Claude would cost about the same as Kagi, while Kagi gives me access to both (plus Mistral and Gemini). So if you're interested in playing around with the latest premium models, I still think Kagi's Ultimate plan is a good deal.

That said, I've been disappointed with the development of LLMs this year across the board, and I'm not convinced any of them are worth the money at this point. This isn't so much a problem with Kagi as it is with all the LLM vendors. The models have gotten significantly worse for my use cases compared to last year, and I don't quite understand why; I guess they are optimizing for benchmarks that simply don't align with my needs. I had great success getting zsh or Python one-liners last year, for example, whereas now it always seems to give me wrong or incomplete answers.
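For concreteness, the kind of one-liner task I mean (this is a hand-written example of the sort of thing I'd ask for, not model output):

```python
import pathlib

# Total size in bytes of all *.log files under the current directory.
total = sum(p.stat().st_size for p in pathlib.Path(".").rglob("*.log"))
```

Trivial to write yourself, but faster to ask for, as long as the answer comes back correct.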

My biggest piece of advice when dealing with any LLM-based tools, including Kagi's, is: don't use it for anything you're not able to validate and correct on your own. It's just a time-saver, not a substitute for your own skills and knowledge.

[–] hersh@literature.cafe 59 points 5 months ago (19 children)

Is this legit? This is the first time I've heard of human neurons used for such a purpose. Kind of surprised that's legal. Instinctively, I feel like a "human brain organoid" is close enough to a human that you cannot wave away the potential for consciousness so easily. At what point does something like this deserve human rights?

I notice that the paper is published in Frontiers, the same publisher that let the notorious AI-generated giant-rat-testicles image through. They are not highly regarded in general.

[–] hersh@literature.cafe 17 points 5 months ago (3 children)

DuckDuckGo is an easy first step. It's free, publicly available, and familiar to anyone who is used to Google. Results are sourced largely from Bing, so there is second-hand rot, but IMHO there was a tipping point in 2023 where DDG's results became generally more useful than Google's or Bing's. (That's my personal experience; YMMV.) And they're not putting half-assed AI implementations front and center (though they have some experimental features you can play with if you want).

If you want something AI-driven, Perplexity.ai is pretty good. Bing Chat is worth looking at, but last I checked it was still too hallucinatory to use for general search, and the UI is awful.

I've been using Kagi for a while now and I find its quick summaries (which are not displayed by default for web searches) much, much better than this. For example, here's what Kagi's "quick answer" feature gives me with this search term:

Room for improvement, sure, but it's not hallucinating anything, and it cites its sources. That's the bare minimum anyone should tolerate, and yet most of the stuff out there falls wayyyyy short.

[–] hersh@literature.cafe 18 points 6 months ago (2 children)

Something like for-jay-yo.

From https://forgejo.org/faq/ :

Forgejo (pronounced /forˈd͡ʒe.jo/) is inspired by forĝejo, the Esperanto word for forge.

[–] hersh@literature.cafe 5 points 6 months ago

I recently upgraded to a 7900 XTX on Debian stable, as well. I'm running the newest kernel from Debian's backports repo (6.6, I think), and I didn't have that same problem.

I did have other problems with OpenCL, though. I made a thread about this and solved it with some trouble. Check my post history if you're interested. I hope it helps. I can take a closer look at my now-working system for comparison if you have further issues.

[–] hersh@literature.cafe 2 points 6 months ago (1 children)

IT WORKS NOW! I will need time to run additional tests, but the gist of my solution was:

  1. Backport llvm-18 from sid following the guide you linked at https://wiki.debian.org/SimpleBackportCreation

  2. After compiling and installing all those deb files, I then installed the "jammy" version of amdgpu-install_6.0.60002-1.deb from https://www.amd.com/en/support/linux-drivers

  3. Downloaded the latest firmware files from the linux-firmware repo at https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git, and simply copied all the files from its lib/firmware/amdgpu folder into my system's /lib/firmware/amdgpu. Got that idea from https://discussion.fedoraproject.org/t/amdgpu-doesnt-seem-to-function-with-navi-31-rx-7900-xtx/72647

  4. sudo update-initramfs -u && sudo reboot
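Steps 3 and 4 boil down to something like this (paths as above; it needs root, so take it as a sketch rather than a tested script):

```shell
# 3. Copy current amdgpu firmware out of a fresh linux-firmware checkout.
git clone --depth=1 \
    https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
sudo cp linux-firmware/amdgpu/* /lib/firmware/amdgpu/

# 4. Rebuild the initramfs so the new firmware is available at early boot.
sudo update-initramfs -u && sudo reboot
```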

I'm not totally sure that step 3 was sane or necessary. Perhaps the missing piece before that was that I needed to manually update my initramfs? I've tried like a million things at this point and my system is dirty, so I will probably roll back to my snapshot from before all of this and attempt to re-do it with the minimal steps, when I have time.

Anyway, I was able to run a real-world OpenCL benchmark, and it's crazy-fast compared to my old GTX 1080. Actually a bigger difference than I expected. Like 6x.

THANKS FOR THE HELP!

 

I looked this up before buying the GPU, and I read that it should "just work" on Debian stable (Bookworm, 12). Well, it doesn't "just work" for me. :(

clinfo returns two fatal errors:

fatal error: cannot open file '/usr/lib/clc/gfx1100-amdgcn-mesa-mesa3d.bc': No such file or directory

fatal error: cannot open file '/usr/lib/clc/gfx1030-amdgcn-mesa-mesa3d.bc': No such file or directory

I get similar errors when trying to run OpenCL-based programs.
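Based on those error messages, here's a quick way to check which libclc bitcode targets are actually installed (the filename pattern is taken straight from the errors above; the directory is parameterized only so it can be pointed elsewhere):

```python
from pathlib import Path

def missing_clc_targets(targets, clc_dir="/usr/lib/clc"):
    """Return the gfx targets whose libclc bitcode file is absent.

    The "<target>-amdgcn-mesa-mesa3d.bc" filename pattern comes from
    the clinfo errors quoted above.
    """
    clc = Path(clc_dir)
    return [t for t in targets
            if not (clc / f"{t}-amdgcn-mesa-mesa3d.bc").exists()]
```

On a working install, `missing_clc_targets(["gfx1100", "gfx1030"])` should come back empty; on my system both files are missing.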

I'm running a backported kernel, 6.6.13, and the latest Bookworm-supported mesa-opencl-icd, 22.3.6. From what I've found online, this should work, though Mesa 23.x is recommended. Is it safe/sane to install Mesa from Debian Trixie (testing)?

I've also seen references to AMD's official proprietary drivers. They do not officially support Debian, but can/should I run the Ubuntu installer anyway?

I'm hoping to get this up and running without any drastic measures like distro hopping. That said, if "upgrade to Testing or Unstable" is the simplest approach, I am willing to entertain the idea.

Thanks in advance for any help you can offer.
