self

joined 2 years ago
[–] self@awful.systems 5 points 9 months ago

holy fuck hetzner what in the fuck are you doing. please stop having the lowest tier of the abuse team try to trick the woem.men admin into accepting the shit non-apology legal drafted as a way to try and head off a lawsuit. nobody cares that your PR department got inconvenienced by the strong public response to this (in fact, it’s a good thing)

go and very publicly commit to making policy changes that’ll protect LGBTQ+ folks or fuck off. shit, in a way that’ll be legally expensive and embarrassing if you don’t follow through, or get off the fucking pot. having your low-level IT staff pretend you’re Some Guy LLC is a bad fucking look. we all know the guy whose only job is to send threatening emails and press the “close account” button doesn’t have any power to do anything of substance. why in the fuck is that guy still in the woem.men admin’s inbox making demands?

[–] self@awful.systems 5 points 9 months ago

I’ve had the misfortune to watch the pipeline that turns publicly-funded research into extremely expensive drugs in action, and it fucking sucks to see the same ghouls who profit from that process turn around and claim that the resulting drugs are expensive because they had to pay for R&D (with, of course, the accompanying bullshit excuses that “only your insurance pays” or “we have a discount program”, which is small comfort when you’re staring down a $700+ monthly bill for the meds you need and both your insurance and the discount program have decided you don’t need it bad enough)

[–] self@awful.systems 7 points 10 months ago

note that this instance is also on hetzner (cause I was also fairly happy with them til now, other than a bit of missing functionality), so suggestions of reasonably-priced alternatives that don’t suck and will let us run NixOS are appreciated

though I might need a bit of convincing if those alternatives take the form of “OVH doesn’t suck now” or “go ahead and put your human flesh near Larry Ellison”

[–] self@awful.systems 21 points 10 months ago

With due caution by people who know what the fuck they’re doing.

this is one of the tip-offs I use to quickly differentiate AI crackpottery from legitimate ML. anything legitimate will prominently display its false positive and negative rates, known limitations, and procedures for fucking checking the data by hand (with accompanying warnings and disclaimers if you fail to do this). AI bullshit very frequently skips all that, because the numbers don’t look good and you’re more likely to get VC funding if you hide them
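
(for anyone who wants the concrete version of “false positive and negative rates”: they fall straight out of comparing predictions against ground truth. a toy sketch below with made-up labels and no particular library assumed; the point is these numbers are cheap to compute, so there’s no excuse for not reporting them)

```python
# toy sketch: false positive / false negative rates for binary predictions.
# the labels and predictions are made up for illustration.

def error_rates(y_true, y_pred):
    """return (false positive rate, false negative rate) for 0/1 labels."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    return (fp / negatives if negatives else 0.0,
            fn / positives if positives else 0.0)

if __name__ == "__main__":
    truth      = [1, 0, 0, 1, 0, 1, 0, 0]
    prediction = [1, 0, 1, 0, 0, 1, 0, 1]
    fpr, fnr = error_rates(truth, prediction)
    print(f"false positive rate: {fpr:.2f}")   # 2 of 5 negatives flagged: 0.40
    print(f"false negative rate: {fnr:.2f}")   # 1 of 3 positives missed:  0.33
```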

[–] self@awful.systems 5 points 10 months ago

that’s a great idea! the only BLC VMs I know of are written in a very obscure style (Tromp’s especially — his first interpreter was an entry into the International Obfuscated C Code Contest and he only posted the (relatively) unobfuscated one later) and I think there’s plenty of room for something written to be more comprehensible. I’m also not aware of any VM that implements call/cc from Krivine’s original paper, which has interesting applications. and of course, all the Krivine machine implementations I know of are relatively slow and very memory-inefficient — but there’s low-hanging fruit here that can make things better.
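
to give a sense of how little machinery a readable BLC reader actually needs (this is a toy sketch of my own, not Tromp’s code): in his encoding an abstraction is 00, an application is 01, and the de Bruijn variable n is n ones followed by a zero.

```python
# toy parser for Tromp's binary lambda calculus bitstream:
#   00 <body>        -> abstraction
#   01 <fun> <arg>   -> application
#   1...1 0 (n ones) -> de Bruijn variable with index n (indices start at 1)
# returns a nested-tuple AST plus the number of bits consumed.

def parse_blc(bits, i=0):
    if bits[i] == "0" and bits[i + 1] == "0":     # abstraction
        body, i = parse_blc(bits, i + 2)
        return ("lam", body), i
    if bits[i] == "0" and bits[i + 1] == "1":     # application
        fun, i = parse_blc(bits, i + 2)
        arg, i = parse_blc(bits, i)
        return ("app", fun, arg), i
    n = 0                                          # variable: count the ones
    while bits[i] == "1":
        n += 1
        i += 1
    return ("var", n), i + 1                       # skip the terminating zero

if __name__ == "__main__":
    # 0010 is the identity function, λ 1 in de Bruijn form
    print(parse_blc("0010"))        # (('lam', ('var', 1)), 4)
    # 00 01 10 10 is λ (1 1), self-application under a binder
    print(parse_blc("00011010"))    # (('lam', ('app', ('var', 1), ('var', 1))), 8)
```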

one thing I might take on is implementing a visual Krivine machine — something with a GUI that shows its current state and a graph of all the closures in memory. that would be a big boon for my current work, and I might see if I could graft something like that onto the simulation testbench for my HDL implementation.
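
(rough idea of what the closure-graph half could look like long before there’s a GUI: a hypothetical dump format I just made up, not anything in my testbench. walk the heap and spit out Graphviz DOT)

```python
# hypothetical sketch: dump a Krivine-machine closure heap as Graphviz DOT.
# a closure here is (term_label, environment), where an environment is a list
# of heap addresses pointing at other closures. the heap layout is invented.

def heap_to_dot(heap):
    """heap: dict mapping address -> (term_label, [addresses in environment])."""
    lines = ["digraph closures {"]
    for addr, (term, env) in heap.items():
        lines.append(f'  n{addr} [label="{addr}: {term}"];')
        for slot, target in enumerate(env):
            lines.append(f'  n{addr} -> n{target} [label="env[{slot}]"];')
    lines.append("}")
    return "\n".join(lines)

if __name__ == "__main__":
    # tiny made-up heap: closure 0 closes over closures 1 and 2
    heap = {
        0: ("λ (1 2)", [1, 2]),
        1: ("λ 1", []),
        2: ("λ λ 2", []),
    }
    print(heap_to_dot(heap))   # pipe into `dot -Tpng` to get a picture
```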

[–] self@awful.systems 11 points 10 months ago (1 children)

co-creator of markdown

my opinion of Gruber immediately went from “fuck that guy” to “FUCK THAT GUY” when I realized he’s the same one who’s been writing the garbage non-specs that guarantee I’ll never have a good time parsing bbcode-but-everywhere

[–] self@awful.systems 5 points 10 months ago (2 children)

I have a scattered interest in lambda calculus too so I’d love to follow this project. Tromp’s BLC definitely hits a sweet spot of complexity/size when it comes to describing computation in a way that’s deeply satisfying.

exactly! it’s such a cool way to write a program, and it’s so much more satisfying than writing assembly for a von Neumann (or any load/store) machine. have you checked out LambdaLisp? it’s one of my inspirations for this project — it’s amazing that you can build a working Lisp interpreter out of BLC, and understanding how that was done taught me so much about Lisp’s relationship with lambda calculus.

I plan to release my HDL as a collaborative project once I’ve got enough done to share out. currently I’ve got the HDL finished for the combinational circuit that makes bitstream BLC processing efficient with word-oriented memory hardware, and I’m doing debugging on the buffer that grabs words from memory and offsets them if they represent a term that isn’t word-aligned (which is a pretty simple circuit so I’m surprised I’ve managed to implement so many bugs). there’s quite a bit left to go! IO is still a sticking point — I know how I want to do it, but I can’t quite imagine how memory and runtime state will look after the machine reads or writes a bit.
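
for the curious, the software model of what that buffer has to do is roughly this (a behavioural sketch with a made-up word width and bit order, not the actual Amaranth code): read the word holding the start bit plus its neighbour, then shift and mask so the field lands at bit 0.

```python
# behavioural model (not the real HDL) of fetching a bit field that isn't
# word-aligned: read the word containing the start bit plus the next word,
# then shift/mask. assumes the bitstream is packed MSB-first into 16-bit
# words; both the width and the bit order are guesses for illustration.

WORD_BITS = 16

def read_unaligned(memory, bit_addr, width):
    """memory: list of WORD_BITS-wide ints. returns `width` bits from bit_addr."""
    assert width <= WORD_BITS
    word_index = bit_addr // WORD_BITS
    offset = bit_addr % WORD_BITS
    # concatenate two consecutive words so the field can straddle the boundary
    window = (memory[word_index] << WORD_BITS) | memory[word_index + 1]
    shift = 2 * WORD_BITS - offset - width
    return (window >> shift) & ((1 << width) - 1)

if __name__ == "__main__":
    mem = [0b0000000000001111, 0b1110000000000000]
    # the 8-bit field at bit offset 12 crosses from word 0 into word 1
    print(bin(read_unaligned(mem, 12, 8)))   # 0b11111110: seven ones then a zero
```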

Have you looked into interaction nets/other optimal beta-reduction schemes (there’s a project out there called HVM)?

that seems awesome! I really like that it can do auto-parallelization, and I want to check out how it optimizes lambda terms. for now my machine model is a pretty straightforward Krivine machine with some inspiration taken from the Next 700 Krivine Machines paper, which seems likely to yield a machine that can be implemented as circuitry. that paper decomposes Krivine-like machine models down into combinators, which can be seen as opcodes, microinstructions, or (in my case) operations that need to be performed on memory during a particular machine state.

once I’ve got the basic machine defined, I’d like to come back to something like HVM as a higher-performance lambda calculus machine and see what can be adopted. one of their memory invariants in particular (the guarantee that each closure is only used once) maps really well to how I imagine a hardware parallel lambda calculus machine working

[–] self@awful.systems 6 points 10 months ago

right? it’s a weird combination of these folks never engaging with the work they pretend to celebrate and trying to pretend that their AI fantasy will turn real life into a space opera. it’s fucking awful

[–] self@awful.systems 6 points 10 months ago* (last edited 10 months ago)

for anyone who’s fucking lost reading the above (I can’t blame ya), lambda calculus is the mathematical basis behind functional programming. this is a fun introduction. the only things you can do in lambda calculus are define functions, name variables, and apply functions to other functions or variables (which replaces the variables with whatever the function is being applied to and eliminates the function). that’s all you need to represent every possible computer program, which is amazing

a Krivine machine is a machine for doing what the alligators in that intro are doing, automatically — that is, reducing down lambda functions until they can’t be reduced anymore and produce a final value. that process is computation, so a Krivine machine is a (rather strange) computer
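
if it helps to see it rather than read about it, here’s a toy version in Python (the textbook machine, nothing to do with the hardware): a state is just a term, an environment, and a stack, and there are only three rules.

```python
# toy call-by-name Krivine machine. terms are tuples:
#   ("var", n)      de Bruijn index n (0 = innermost binder in this sketch)
#   ("lam", body)   abstraction
#   ("app", f, a)   application
# a closure is (term, env); an env is a list of closures; the stack holds closures.

def krivine(term, env=(), stack=()):
    env, stack = list(env), list(stack)
    while True:
        kind = term[0]
        if kind == "app":                      # push the argument as a closure, descend into the function
            stack.append((term[2], list(env)))
            term = term[1]
        elif kind == "lam":
            if not stack:                      # nothing left to apply: we're done reducing
                return term, env
            env = [stack.pop()] + env          # grab one argument into the environment
            term = term[1]
        else:                                  # variable: jump into the closure stored in the environment
            term, saved_env = env[term[1]]
            env = list(saved_env)

if __name__ == "__main__":
    identity = ("lam", ("var", 0))             # λx. x
    const    = ("lam", ("lam", ("var", 1)))    # λx. λy. x
    second   = ("lam", ("lam", ("var", 0)))    # λx. λy. y
    # ((λx. λy. x) identity) second  should hand back identity
    result, _ = krivine(("app", ("app", const, identity), second))
    print(result)                              # ('lam', ('var', 0)), i.e. the identity function
```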

[–] self@awful.systems 6 points 10 months ago (5 children)

sure! there was a little bit about it in the first stubsack and I posted a bit more about it in this thread on masto (with some links to papers I’ve been reading too, if you’d like to dig into the details on anything)

overall what I’m working on is a hardware implementation of a Krivine machine, which uses Tromp’s prefix code bitstream representation of binary lambda calculus as its machine language and his monadic IO model to establish a runtime environment. it isn’t likely to be a very efficient machine by anyone’s standard, but I really like working with BLC as a pure (and powerful) form of computational math, and there’s something pleasant about the way it reduces down to a HDL representation (via the Amaranth HDL in this case). there’s a few subprojects I’ve been working on as part of this:

  • the basic HDL implementation targeting open source FPGA synthesis and simulation
  • a hardware closure allocator and garbage collector
  • an assembler to convert lambda calculus expressions into their binary form (which starts to resemble ML with a bunch of high-level capabilities, with very little code either in the assembler or in ROM on the device — that’s one part of what makes the work interesting; there’s a rough sketch of just the encoding step after this list)
  • a lazy version (Krivine machines are call-by-name, which is almost there, and the missing pieces needed for lazy evaluation look a lot like a processor cache but with more structure)
  • I have the intuition that the complete Krivine machine will be fairly light on FPGA resources, so I’d like to see how many I can synthesize onto one core with parallelism primitives, FIFOs, and routing included
  • lambda calculus machines can do arithmetic and high-level logic without an ALU, which is neat but extremely inefficient. I have some basic plans sketched up for an arithmetic unit that’d allow for a much more cycle and memory efficient representation of integers and strings, and a way to derive closures from them
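
(as mentioned in the assembler bullet: a back-of-the-envelope sketch of just the encoding step, nowhere near the actual tool. de Bruijn-indexed terms out to Tromp’s bitstream, with indices starting at 1 as BLC wants; the tuple term shape is something I made up for the sketch)

```python
# rough sketch of the core of the assembler direction: de Bruijn-indexed
# lambda terms out to Tromp's BLC bitstream. the real tool would sit on top
# of a higher-level surface syntax.
#   ("lam", body)  -> "00" + encode(body)
#   ("app", f, a)  -> "01" + encode(f) + encode(a)
#   ("var", n)     -> "1" * n + "0"      (indices start at 1 in BLC)

def encode_blc(term):
    kind = term[0]
    if kind == "lam":
        return "00" + encode_blc(term[1])
    if kind == "app":
        return "01" + encode_blc(term[1]) + encode_blc(term[2])
    return "1" * term[1] + "0"

if __name__ == "__main__":
    identity   = ("lam", ("var", 1))                        # λ 1
    self_apply = ("lam", ("app", ("var", 1), ("var", 1)))   # λ (1 1)
    print(encode_blc(identity))      # 0010
    print(encode_blc(self_apply))    # 00011010
```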

I’ve been working on some of this on paper as a sleep aid for a while, but I’m finally starting on what’s feeling like a solid HDL implementation. let me know if you want more details on any of it! some of the more far off stuff is really just a mental sketch, but writing it out will at least help me figure out what ideas still make sense when they’re explained to someone else

[–] self@awful.systems 7 points 10 months ago (9 children)

fucking christ. it takes a lot to fuck up my day, but a quick scroll through that thread seeing how quick these vultures (including one notable person who’s the reason why I’m ashamed to talk about my lambda calculus projects) are trying to capitalize on Vernor’s legacy is absolutely doing it

HN wanted a black bar[1] but were denied.

why in the fuck? is the famous sci-fi author with a heavy CS background not notable enough for the standards of the site whose creator is a much less notable self-help author whose CS background consists of failing to make a working Lisp 3 times and writing programming textbooks nobody reads?

[–] self@awful.systems 7 points 10 months ago

I’ve been following it too, and hoping it yields a fork with better development priorities (and, frankly, developers) than lemmy, though I’m not at all looking forward to dealing with deploying Java and Go to production
