• 1 Post
  • 40 Comments
Joined 1 year ago
Cake day: August 7th, 2023

  • Norway has something similar: you usually own the inside, and the HOA owns the outside, including the houses themselves. I live in one, and it's largely a good thing, though some things come slowly since they have to be voted on, of course. Generally worth it, since you get good deals on things like internet. It's cheaper, but it's also usually something you have to use and the only option, e.g. only that one internet provider.

    In my case, they are also responsible for my balanced ventilation, my exterior doors and my water heater, so when the time comes, they handle it. Shared costs cover snow plowing, the shared community building, upkeep of the garage, the outdoor areas and the buildings, plus things like water bills and taxes. In particular, purchases within the HOA are exempt from the fee of 2.5% of the purchase price you normally pay when buying a home (e.g. 100,000 NOK saved on a 4,000,000 NOK home). That alone saves you quite a bit and makes up for some of the extra you pay in monthly costs (but pretty much all of those go somewhere that benefits you anyway).

    The downsides: there are special rules, so some members may have the right to take over the winning bid in a sale. I used this myself to purchase my place, having accumulated 10 years of seniority in the “HOA company”. You spend that seniority with your purchase, and you're not allowed to own more than one unit. Also, no long-term renting, so there are no companies buying units up to rent out and things like that. You have to live in the HOA.


  • If you use it frequently, I suggest getting a GUI that has profiles or remembers options, so you don't have to mess with commands all the time. I wrote my own little command-line wrapper, which is Windows-only since I don't have Linux to test on. Though it shouldn't take much effort to add support.

    Makes it much more convenient when you don't have to specify things like the archive (ignore duplicates), the filename being “artist - title” (where possible), the download destination, etc. Just alt-tab, Ctrl-V, Enter, and the download is running. Mine also has parallel downloads and a queue for when you have many slow downloads.
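    For reference, a minimal sketch of what such a wrapper can look like in Rust, assuming the underlying downloader is yt-dlp (the archive, output-template and paths flags are real yt-dlp options; the wrapper itself and its hardcoded values are made up):

    use std::env;
    use std::process::Command;

    fn main() {
        // Hypothetical wrapper: take a URL as the only argument and bake in
        // the options you would otherwise retype every time.
        let url = env::args().nth(1).expect("usage: dl <url>");

        let status = Command::new("yt-dlp")
            // Skip anything already recorded in the archive file (ignore duplicates).
            .args(["--download-archive", "archive.txt"])
            // Name files "artist - title" where the metadata allows it.
            .args(["-o", "%(artist)s - %(title)s.%(ext)s"])
            // Fixed download destination.
            .args(["-P", "D:/downloads"])
            .arg(&url)
            .status()
            .expect("failed to run yt-dlp");

        std::process::exit(status.code().unwrap_or(1));
    }

    Parallel downloads and a queue would sit on top of this, e.g. a few worker threads pulling URLs from a channel and each spawning a command like the one above.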






  • A quick google shows that Kanban is a method, mainly around picking up things as they come, but also limiting how much can happen at once.

    The project I'm on has a team that uses Kanban for the “maintenance” tasks/development: take what is at the top of the board and do it, adapting if higher-priority things come around, such as prod bugs. Our development teams are trying to implement Scrum, where interruptions are to be avoided during sprints if possible. You plan a sprint, try to do that work, present it, and iterate when users inevitably change the criteria.

    In the meme, Kanban does somewhat make sense, since getting armrests is never going to be a high priority as part of building a rocket. Scrum isn't exactly right, but I can see where it's coming from. They are all agile methods, though.


  • I kinda get where he is coming from, though. AI is being crammed into everything, especially into things it is not currently suited for.

    After learning about machine learning, you kind of realize that unlike “regular programs”, ML gives you “roughly what you want” answers. Approximations, really. This is all fine and good for generating images, for example, because minor details being off from what you wanted probably isn't too bad. A chatbot itself isn't wrong here either, because there are many ways to say the same thing. The important thing is that there is a definite step after that where you evaluate the result. In simpler ML you can even figure out the specifics of the process, but for the most part we just evaluate what the LLM said, or whether the image matches our expectations. We can't control or constrain the output to exactly our needs, because our restrictions are largely just input into an almost-finished approximation engine.

    The problem is that companies take these approximation engines, put them in their products and treat their output as fact. Like AI chatbots doing customer support and making up facts, like the airline customer who was told about rules that didn't exist, or the search engines that parrot jokes or harmful advice. Sure, you and I might realize these things come from a machine that doesn't actually think about its answers, but others don't. And slapping a “*this might be wrong because it's AI” on it is not an acceptable waiver of accountability.

    Despite this, I use ChatGPT and Gemini a lot to help me program; they get a lot of things wrong but also do great. They're great tools, exactly because I step in after the approximation step, review and decide. I'm aware of the limits. But putting these things in front of “users” without a review step means you are advertising that you are either unaware of this flaw, or that you ran the cost-benefit analysis and decided that, if nothing else, it'll generate interest during the hype.

    There is huge potential, but throwing AI into situations where facts are needed, when it's only making rough guesses, is the wrong way to go about it.




  • It's worth adding that I greatly prefer MS Authenticator-style authentication, since I don't have to find the right entry, read off the auth code and then type it on the other computer. Instead, MS pops a notification and you either type or select the right number, verify with a fingerprint, and you're done. Much more convenient.

    It often tells you what you're logging into and where the attempt is coming from, so it's a few extra layers of security for those who have the awareness to check those details.




  • I'm in the MPC-HC gang on Windows. It's just so much more practical than other players. The main selling point was that in full-screen the controls go away once you move the cursor off them; it was amazing. And no waiting for subs to be processed like VLC had to back then. I never turned back, so I don't know if that is still a thing.


  • I'm just learning Rust for fun, but decided I wanted to make a simple website. I don't like web stuff that much, but I'd seen htmx, so I gave that a shot. I found the popular actix for the server side, and set out to make a simple blog.

    Making a page is simple; using htmx is also simple. Setting out to create a blog that is all in a single evolving page? Not so much. Either you don't get the essential back and forward navigation, or you add it but then a site refresh will call just the partial endpoint and screw things up. There are some quite nice workarounds (one sketched below), but the end result is that sometimes going back will leave me on a blank page at one step.
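    One of those workarounds, sketched here with actix-web (the route and markup are made up for illustration): htmx adds an HX-Request header to the requests it makes itself, so the same endpoint can serve the bare partial to htmx and the full layout to a hard refresh or direct visit.

    use actix_web::{get, web, App, HttpRequest, HttpResponse, HttpServer};

    // Hypothetical blog-entry route; the markup is a stand-in.
    #[get("/post/{slug}")]
    async fn post(req: HttpRequest, slug: web::Path<String>) -> HttpResponse {
        let partial = format!("<article><h1>{}</h1><p>…</p></article>", slug.into_inner());

        // htmx sends "HX-Request: true" with its own requests; a hard
        // refresh or direct navigation won't, so wrap the partial in the
        // full layout instead of serving a bare fragment.
        let body = if req.headers().contains_key("HX-Request") {
            partial
        } else {
            format!("<html><body><main id=\"content\">{partial}</main></body></html>")
        };

        HttpResponse::Ok().content_type("text/html").body(body)
    }

    #[actix_web::main]
    async fn main() -> std::io::Result<()> {
        HttpServer::new(|| App::new().service(post))
            .bind(("127.0.0.1", 8080))?
            .run()
            .await
    }

    Combined with hx-push-url for the history entries, this is roughly the standard pattern for making a single evolving page survive refreshes.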

    I'm probably going to settle for each blog entry being a separate page if I make the site public. Or just leave the small flaws in, because I hate how slow sites are these days, and loading literally only the text/HTML that's supposed to change is very cool.

    Next steps are to remove the chance of path traversal (reading literally any file on disk by modifying URLs…), pick some markdown-to-HTML crate, and see how image loading works. If I ever get around to any of it.


  • I made do with my IDE, even after getting a developer job. Outside of shenanigans involving a committed password, and the occasional empty commit (git commit --allow-empty) to trigger a build job on GitHub without requiring a new review to be approved, I still don't use the command line a lot.

    But it's true: if you managed to commit and push, you are OK. Even the IDE will make fixing most merges simple.




  • And that's the part that irks me a little. How am I supposed to expect anything when I don't expect the undefined behavior to begin with?

    Say I accidentally do something undefined: I do some math incorrectly on an index and try to read out of bounds from an array that doesn't do bounds checks. It's already my fault for several reasons, but in a complex project, what the array is and how it behaves may be “vague”, especially if it's not something you wrote yourself. So as a scenario it's not completely out there (some other dev knew the index was “always” right, did some premature optimization and used unguarded arrays instead). Still completely avoidable, but it can happen.

    But if only an edge case actually leads to the out-of-bounds read, the problem will probably never show up where it was introduced. An experienced dev might never step into this mess, but sometimes it happens when people change up what others did. I've had similar problems at my workplace, just not with undefined behavior as the result. At the end of the day, you're sitting there with hard-to-find issues that have hard-to-know consequences.

    This doesn't require a special programming language to solve; first and foremost it just requires a bounds-checked array, and tests and good reviews might catch the bug before production as well. All of which we were taught about from the beginning in C++. If performance actually was a problem, then I guess we might still end up with this bug in the example. But my point is something along these lines: all the good practice comes down to the choices of the individual developer, and the choices of one also affect the next one.

    Now imagine we could instead place those choices a step up in the chain: have the language enforce safety unless you specifically say you need to be unsafe. So in my example, either the code would already be safe, or someone threw in an unsafe block to do their “fast” read of the array. In the first case I get a crash from the bad read; in the other I'll have to really evaluate why this is unsafe to begin with and apply extra caution. That extra caution goes away when everything is unsafe. Kind of like how a small PR will get 15 comments while a huge PR gets fewer. You won't dedicate your all to every line of code, but if a line is tagged “unsafe” you sure as hell will.
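    As a minimal sketch of that difference in Rust (the array and index are made up for illustration):

    fn main() {
        let a = [10, 20, 30];
        let i = 5; // an index that went wrong somewhere upstream

        // Safe indexing is bounds-checked: a[i] here would panic with
        // "index out of bounds" right at the bad access, instead of
        // silently reading garbage.

        // Checked access without panicking:
        match a.get(i) {
            Some(v) => println!("got {v}"),
            None => println!("index {i} is out of bounds"),
        }

        // The "fast" unchecked read has to be spelled out loudly,
        // which is exactly the flag that invites extra scrutiny:
        // let v = unsafe { *a.get_unchecked(i) }; // out-of-bounds UB again
    }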

    Nothing is a magic solution to everything, and unsafe array reads are just a simple example of fucky behavior. Even something as innocent-looking as this used to be undefined in C++:

    a[i] = i++;

    That is, to me, something you can quickly forget about. And it was undefined because the write to a[i] and the increment of i were unsequenced, so the compiler was free to order them however suited its optimizations, even if that broke the code; normally the index into the array would be a different variable. (Since C++17 the right-hand side of an assignment is sequenced before the left, so this particular case is now defined.) Again, here is something that should have obviously stopped me. If the compiler still follows through (I'm sure there are warnings for it, like GCC's -Wsequence-point), then it's just letting me make an error no one should be allowed to. There is no reason this should ever compile.

    If we can do that for everything at the language level, it's a win in my book.


  • I'm just a noob when it comes to low-level languages, having only been in C# and Python. But I took a course on C++ and encountered something that didn't seem right. I asked, and got “that's undefined behavior”. That didn't quite sit right with me. We don't know what will happen? It'll probably crash? Or worse? How can one not know what a programming language will do? It felt wrong to me.

    Now, it's been quite some time since that happened, and I understand why it's undefined. But I still don't think it should be allowed by default. C and C++ are both “free to do as you want” languages, but I don't think a language should let you do something undefined, especially if you aren't aware you're doing it. Everyone makes mistakes, even stupid ones. If we can make a place where undefined behavior simply won't happen, why not go there? If you need some special tricks, you can always drop the guard where you need it. I guess I'm just reiterating the article here, but that's the point for me: if something can enforce “defined behavior” by default, then I want that.