hosaka

joined 1 year ago
[–] hosaka@programming.dev 1 points 1 month ago (1 children)

If only it wasn't paywalled

[–] hosaka@programming.dev 2 points 1 month ago (2 children)

HTMX is great, but I don't think it's what OP needs, since the input and desired output aren't hypermedia in the first place.

[–] hosaka@programming.dev 3 points 1 month ago

Honestly, I'm not sure about Swagger; I've only ever used swagger-ui to show API docs on a webpage. OpenAPI as a standard and openapi-generator are not abandoned and are quite active. I'll give you an example of how I use it.

I have a FastAPI server in Python that defines some endpoints and the data models it works with; it exports an openapi.json definition. I also have a common schemas library defined with pydantic that exports its own openapi.json (Python was chosen to make it easy for other team members to make quick changes). This schemas library is imported by the FastAPI app, so only the data models are shared between them.
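Roughly, the layout looks like this (a minimal sketch; the model and endpoint names here are made up, not our actual code):

```python
# schemas/models.py - shared pydantic package (hypothetical names)
from pydantic import BaseModel

class Job(BaseModel):
    id: int
    name: str
    priority: int = 0

# app/main.py - FastAPI server that imports the shared models
from fastapi import FastAPI

app = FastAPI()

@app.post("/jobs", response_model=Job)
def create_job(job: Job) -> Job:
    return job

# FastAPI serves the spec at /openapi.json; app.openapi() returns the same
# dict if you want to dump it to a file in CI.
```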

I use the FastAPI openapi.json to generate C++ code in one application (the end-user app) using openapi-generator-cli; serialization/deserialization is handled by the generated code, and since the pydantic schemas are a dependency of the FastAPI server, both the endpoints and the data models get generated. The pydantic openapi.json is also used by our frontend, written in TypeScript, to generate the data models only, since the frontend doesn't need to call FastAPI directly; it has the option to do so in the future by generating from the FastAPI openapi.json instead.
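The generation step itself is just openapi-generator-cli runs in CI. A sketch of what ours roughly amounts to (spec paths, generator choices and output directories are illustrative, and this assumes the CLI is on PATH):

```python
# regen.py - illustrative CI helper around openapi-generator-cli
import subprocess

def generate(spec: str, generator: str, out_dir: str, *extra: str) -> None:
    # openapi-generator-cli generate -i <spec> -g <generator> -o <out_dir> [extra]
    subprocess.run(
        ["openapi-generator-cli", "generate", "-i", spec, "-g", generator, "-o", out_dir, *extra],
        check=True,
    )

# C++ client (endpoints + models) from the FastAPI spec
generate("fastapi/openapi.json", "cpp-restsdk", "generated/cpp")

# TypeScript data models only from the pydantic spec
# (--global-property models restricts generation to the models)
generate("pydantic/openapi.json", "typescript-fetch", "generated/ts", "--global-property", "models")
```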

This ensures that we're using the same schema across all codebases. When I make changes to the schema, the code gets re-generated and included in the new C++/web app builds. There are multiple ways to go about versioning, but for a data-only schema I'd just keep it backwards compatible forever (by adding new props as optional fields rather than required ones, and slowly deprecating/removing props that are no longer used).
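For example (field names are made up again), a backwards-compatible change adds the new prop as optional with a default, so older generated clients that never send or read it keep working:

```python
from typing import Optional
from pydantic import BaseModel

class Job(BaseModel):
    id: int
    name: str
    # New prop added as optional with a default: a required field would break
    # older clients, an optional one won't.
    tags: Optional[list[str]] = None
    # A prop on its way out stays optional until no consumer uses it anymore,
    # then it gets removed in a later pass.
    priority: Optional[int] = None
```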

I found this to be more convoluted than just using something like gRPC/Protobuf (which can also be serialized to/from JSON); I've used it before and it was great. But for other devs who only need to change a few lines of Python and don't want to deal with the protobuf compiler, it's a more frictionless solution, at the cost of more moving parts and some CI/CD setup on my side.

[–] hosaka@programming.dev 6 points 1 month ago (12 children)

Use an OpenAPI schema. You can define data models and endpoints, or just the models; I do this at work. Then generate your code using openapi-generator.

[–] hosaka@programming.dev 4 points 2 months ago (1 children)

Double Commander is also worth mentioning

[–] hosaka@programming.dev 7 points 2 months ago* (last edited 2 months ago) (1 children)

I think it's not aimed at protecting against potential attacks; it's aimed at a developer using/writing modules of code. It's not a security guard.

[–] hosaka@programming.dev 1 points 3 months ago

Glad you figured it out! A separate network for a set of services that need to talk to each other is the way I do it for my self-hosted tools. If you want some more ideas on setting up the *arr apps using docker compose, this is my current setup: https://github.com/hosaka/selfhosted/blob/main/servarr.yml

[–] hosaka@programming.dev 1 points 3 months ago* (last edited 3 months ago) (2 children)

I think you're using Docker's internal IPs, which are not static and can change between docker compose runs. You can instead address the containers by name if you connect them to the same virtual network: https://docs.docker.com/compose/networking/#specify-custom-networks

This allows two services to "see each other". For example, "calibre:8081" will resolve to an internal IP address. In general, this is a better approach when you need to connect apps to each other.
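As a quick illustration (assuming both containers are attached to the same compose network; the request path is made up), from inside another container the service name resolves on its own:

```python
# Run inside a container attached to the same compose network as "calibre".
import socket
import urllib.request

# Docker's embedded DNS resolves the service name to its current internal IP
print(socket.gethostbyname("calibre"))  # e.g. 172.18.0.5, may change between runs

# The name stays stable even though the IP does not
with urllib.request.urlopen("http://calibre:8081/") as resp:
    print(resp.status)
```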

[–] hosaka@programming.dev 6 points 3 months ago (1 children)

When setting up nvim-treesitter, neither clang nor MSVC worked. Rather, they compiled the necessary libs fine, but the treesitter plugin then failed to load the resulting .so libs. The common troubleshooting steps didn't help (setting clang as the preferred compiler, etc.), so I just ended up installing zig, and that got it working.

[–] hosaka@programming.dev 1 points 3 months ago

It also allows you to use hardware acceleration for inference. Quite a comprehensive set of tools, actually, and the revamped UI is on the horizon with version 0.14.

[–] hosaka@programming.dev 6 points 4 months ago (1 children)

In a game that's production-ready, you'd go through individual assets with the person who designed them and establish when to spawn and despawn them. Since designers tend to go crazy and not worry about memory at all, I try to guide them to think about how much memory is available in a particular scene. It really depends on the game you're making, though.

[–] hosaka@programming.dev 2 points 4 months ago (1 children)

If the goal is to automate PRs: you can push-mirror your fork back to GitHub whenever you deem it necessary (e.g. when it's in good shape) and create a PR to the parent repo automatically from a forgejo runner script; you'd just need to make an API token. If the goal is to not use GitHub for your forks but still keep making PRs, I don't think you can work around that. Unless there's a way to submit a PR as a bunch of patch files, perhaps?
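The PR-creation step from the runner can be as small as one call to GitHub's REST API. A rough sketch (repo names, branch, and env var are hypothetical):

```python
# create_pr.py - run from a forgejo runner job after the push mirror has synced
import json
import os
import urllib.request

token = os.environ["GITHUB_TOKEN"]  # API token with permission to open PRs

payload = {
    "title": "Sync changes from self-hosted fork",
    "head": "yourname:feature-branch",  # fork owner:branch
    "base": "main",
    "body": "Automated PR opened after mirroring the fork back to GitHub.",
}

req = urllib.request.Request(
    "https://api.github.com/repos/upstream-owner/upstream-repo/pulls",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    },
    method="POST",
)

with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["html_url"])
```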
