Shaner

joined 1 year ago
[–] Shaner@programming.dev 1 points 11 months ago

I see you want a screen saver, and this just runs an app you view in your browser. There are other methods, but they're a bit trickier. It's not too hard to add an embedded interpreter, but you'd probably have to do some porting to get it drawing to a native canvas.
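
If you did go the embedded-interpreter route, here's a rough sketch of the idea in Go (assuming the goja JavaScript engine, which I'm picking just for illustration; the drawing hook is a stand-in for whatever native canvas you'd port to):

package main

import (
	"fmt"
	"log"

	"github.com/dop251/goja"
)

func main() {
	vm := goja.New()

	// Expose a native drawing hook to the script. In a real port this
	// would call into your native canvas instead of printing.
	vm.Set("drawRect", func(x, y, w, h float64) {
		fmt.Printf("drawRect(%v, %v, %v, %v)\n", x, y, w, h)
	})

	// Run a bit of the original JavaScript against that hook.
	if _, err := vm.RunString(`drawRect(10, 20, 100, 50);`); err != nil {
		log.Fatal(err)
	}
}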

[–] Shaner@programming.dev 1 points 11 months ago (1 children)

You could do it pretty easily in Go.

Basically just use the embed package (embed.FS) to bundle your JavaScript and HTML.

https://pkg.go.dev/embed

Here's a complete working example with an HTTP server, adapted from that page:

package main

import (
	"embed"
	"log"
	"net/http"
)

// Point this directive at your own JavaScript/HTML files.
//go:embed internal/embedtest/testdata/*.txt
var content embed.FS

func main() {
	// Serve the embedded files over HTTP on port 8080.
	mux := http.NewServeMux()
	mux.Handle("/", http.FileServer(http.FS(content)))
	err := http.ListenAndServe(":8080", mux)
	if err != nil {
		log.Fatal(err)
	}
}
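
One gotcha: http.FS serves the files under their full embedded path, so with the directive above you'd be requesting /internal/embedtest/testdata/foo.txt. If you want your HTML at the root of the URL space, wrap the filesystem with fs.Sub first. A sketch, assuming your files live in a hypothetical "static" directory:

package main

import (
	"embed"
	"io/fs"
	"log"
	"net/http"
)

// Assumes a "static" directory next to this file holding your JS/HTML.
//go:embed static
var content embed.FS

func main() {
	// Strip the "static/" prefix so index.html is served at "/",
	// not at "/static/index.html".
	sub, err := fs.Sub(content, "static")
	if err != nil {
		log.Fatal(err)
	}
	mux := http.NewServeMux()
	mux.Handle("/", http.FileServer(http.FS(sub)))
	log.Fatal(http.ListenAndServe(":8080", mux))
}
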
[–] Shaner@programming.dev 2 points 11 months ago* (last edited 11 months ago) (1 children)

Naw. I always think of a customer engineer as having a large overlap with the person who quotes you a price for some project (building, car repair, etc.).

They look at what you need and try to figure out how to use their company's software to make it happen. But the critical difference is that they don't really build anything other than a demo or proof of concept. They might spec something out and give you a cost estimate. Or they might work with you to architect some piece (as in "hey, you could use S3 here and DynamoDB there, and make sure you don't have a single-region point of failure").

At the end of the day it is sales. It's just trying to show people they can use your company's tools to do what they want.

Also, I work for one of the cloud companies. I spent most of my career as a software engineer, but the most common skill I use is really more devops stuff. Customers aren't asking me to design their business logic; they're often asking me to design their multi-region high-availability story.

[–] Shaner@programming.dev 4 points 11 months ago

Well, if I remember correctly, I was actually first told to use Solaris (Unix) because I knew just a teeny bit of HTML and had done some programming on my TI calculator. I had to use a Sun Ray workstation and learn SSH and Emacs.

My boss took pity on me and bought me a computer (to be paid off with some extra hours). I attempted to install Debian on it and failed. I tried Ubuntu and it worked (somewhere around 2005ish). It was all downhill from there. I did try some other distributions like Arch, but by that point I had a laptop, and while I technically did get WiFi working and it was fun, I preferred the better out-of-the-box hardware support you got from Ubuntu back in those days.

I've stuck with Ubuntu for the most part ever since, even though the Linux guru at my university called it "Linux for office rats". I've tried some other variants like Mint, and while I liked them, eventually I'd have to deal with the fact that the trickier stuff I want to run, like CUDA, just seems easier to get working on Ubuntu. Pre-built packages usually target it.

I've played with alternative window managers like i3 here and there, but once again I find it hard to make sure the really basic stuff (WiFi, sound, etc.) works the way I want it to, and I end up writing my own i3 status scripts or running with some sort of gnome-session thingy.

At the moment my desktop is basically "I don't care, but there are shortcuts for my browser, graphical Emacs, and the kitty terminal".

I'm not an evangelist, because let me tell you from experience: your in-laws will not actually thank you for installing a low-resource, Xfce-based distribution on their computer. They will be unhappy and you'll get support calls. They want Windows, just free.

But for me personally it's the most productive environment for what I do. I do not find macOS to be more stable or, frankly, more coherent. I love their font rendering and HiDPI support, though.

[–] Shaner@programming.dev 9 points 1 year ago (1 children)

I wrote a DNS server that did global software load balancing. Essentially it just had a health-checking component and a sort, and used those to determine the closest healthy endpoint to return.

It was mostly used for cluster failover; in cloud terms, it can keep traffic within a zone if possible, otherwise within the region, otherwise in the closest region.
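
The selection logic itself isn't complicated. Here's a hypothetical sketch of the idea in Go (the types, names, and zone/region ranking are made up, not the real code; a real version would also rank farther regions by geographic or measured latency distance):

package main

import (
	"fmt"
	"sort"
)

// Endpoint is one backend the DNS server can hand out.
type Endpoint struct {
	IP      string
	Zone    string
	Region  string
	Healthy bool // updated by the health-checking component
}

// distance ranks an endpoint relative to the client: same zone beats
// same region, which beats everything else.
func distance(e Endpoint, clientZone, clientRegion string) int {
	switch {
	case e.Zone == clientZone:
		return 0
	case e.Region == clientRegion:
		return 1
	default:
		return 2
	}
}

// pick returns the closest healthy endpoint, or false if none are up.
func pick(endpoints []Endpoint, clientZone, clientRegion string) (Endpoint, bool) {
	healthy := make([]Endpoint, 0, len(endpoints))
	for _, e := range endpoints {
		if e.Healthy {
			healthy = append(healthy, e)
		}
	}
	if len(healthy) == 0 {
		return Endpoint{}, false
	}
	sort.Slice(healthy, func(i, j int) bool {
		return distance(healthy[i], clientZone, clientRegion) <
			distance(healthy[j], clientZone, clientRegion)
	})
	return healthy[0], true
}

func main() {
	endpoints := []Endpoint{
		{IP: "10.0.1.5", Zone: "us-east-1a", Region: "us-east-1", Healthy: false},
		{IP: "10.0.2.5", Zone: "us-east-1b", Region: "us-east-1", Healthy: true},
		{IP: "10.1.1.5", Zone: "eu-west-1a", Region: "eu-west-1", Healthy: true},
	}
	// The unhealthy same-zone endpoint is skipped; traffic fails over
	// to another zone in the same region (prints 10.0.2.5).
	if e, ok := pick(endpoints, "us-east-1a", "us-east-1"); ok {
		fmt.Println("answer:", e.IP)
	}
}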

The reason it was my favorite project is that I was unqualified, but nobody else on my team was a DNS expert either, so I got to drink from the firehose and learn. I had a really good testing feedback pipeline: visitors to our website did a couple of extra background requests on their first page load, and we used the web performance timing API to track DNS lookup times and TCP/HTTP times. So every time I made a change I had millions of performance reports, and I could see the impact of my changes in about 60 seconds in Grafana.

Between learning something totally new and tying it to a short feedback loop with beautiful graphs, I had a great time. Plus, that product literally allowed my company to start using the cloud and build multi-cloud systems.

[–] Shaner@programming.dev 1 points 1 year ago (3 children)

I ran across this today: https://graphite.dev/blog/how-large-prs-slow-down-development

They describe just the problem you are experiencing: change amplification.

Contrary to some comments, this is not a sign of good architecture. It may be needed at your company, but if I were betting, I'd bet it's not.