[–] Lightor@lemmy.world 1 points 4 months ago (1 children)

False. Have 70% uptime and let me know how many clients you have left.

[–] Modern_medicine_isnt@lemmy.world 0 points 4 months ago (1 children)

Uptime isn't quality. Perf and reliability are easily faked with the right metrics. It's trivial for something like PowerPoint to be considered working without working well for the user.

[–] Lightor@lemmy.world 1 points 4 months ago* (last edited 4 months ago) (1 children)

Uptime indicates reliability. Reliability is a factor of quality. A quality product has high uptime. What good is a solution that doesn't work 20% of the time? That's exactly how you lose clients. Why do SLAs cover terms like five-nines uptime if they don't matter and can be faked? This makes no sense.
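
For context, here's roughly what those availability targets allow, as a back-of-the-envelope sketch (ignoring leap years and maintenance windows):

```python
# Rough downtime budget per year for a few common availability targets.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for availability in (0.99, 0.999, 0.9999, 0.99999):
    downtime_minutes = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.3%} uptime -> ~{downtime_minutes:.1f} minutes of downtime per year")
```

Five nines works out to barely five minutes of downtime a year; a product that's down 20% of the time isn't even in the same conversation.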

You said quality doesn't matter, only features. OK, what happens when those features only work 10% of the time? It doesn't matter as long as the feature exists? That's nonsense. Why does QA even exist then? What's the point of spending on a team that only worries about quality? They're literally called Quality Assurance. Why do companies have QA teams if quality doesn't matter? Why not just hire more engineers to pump out features? Again, this makes no sense. Anyone who works in software knows the role of QA and why it's important. You claim to work in tech but seem not to understand the value of QA, which makes me suspicious; either that, or you've only been a frontline dev and never had to worry about these aspects of management and the entire SDLC. Why is tracking defects a norm in software development if quality doesn't matter? Your whole stance just makes no sense.

It's trivial for something like PowerPoint to be considered working without working well for the user.

No, it's not trivial. What if "not working well" means you can't save or type? Not working well means not working as intended, which means it doesn't satisfy the need it was built to fill. You can have the feature to save, but if it only works half the time, then according to you that's fine. You might lose your work, but the feature is there, so who cares about the quality of the feature... If it only saves sometimes or corrupts your file, those are just quality issues that no one cares about; they're "trivial"?

[–] Modern_medicine_isnt@lemmy.world 1 points 4 months ago (1 children)

See, you just set the bar so low. Being able to save isn't working well, it's just working. And I have held the title of QA in the past; it's part of how I know these things. In the last 5 years or so, companies have been laying off QAs and telling devs to do the job. Real QA is hard. If it really mattered you would have multiple QA people per dev, but the ratio is always the other way. A QA can't test the new feature and make sure ALL the old ones still work at the rate a dev can turn out code. Even keeping up with features 1 to 1 would be really challenging. We have automation to try to keep up with the old features, but that needs to be maintained as well. QA is always a case of good enough.

And just like at Boeing, management will discourage QAs from reporting everything they find that is wrong, because they don't want a paper trail of themselves closing tickets as won't-fix. I've been to QA conferences and listened to plenty of seasoned QAs talk about the art of knowing what to report and what not to, and how to focus effort on what management will actually OK to get fixed. It's a whole art for a reason.

I was encouraged to shift out of that profession because my skills would get much better pay, and more stable jobs, in DevOps. And my job is sufficiently obscure to most management that I can actually care about the users of what I write. But I also get to see more metrics that show how the software fails its users while still selling. I have even been asked to produce metrics that would misrepresent how well the software works, for use in upper-level meetings. And I have heard many others say the same. Some have said that is even a requirement to be a principal engineer in bigger companies, which is why I won't take those jobs. The "good enough" I am witness to and part of is bad enough; I don't want to increase it any more.

[–] Lightor@lemmy.world 1 points 4 months ago (1 children)

I'm setting the bar low, sure, and you're moving the goalposts. What "well" means is incredibly subjective.

You worked in QA, cool; I've managed the entire R&D org of a nationwide company, including all of QA.

You're saying that since companies don't invest in it enough, it doesn't matter at all? Why do they invest at all then, if it truly doesn't matter?

Yes, a QA can test old features and keep up with new ones. WTF, have you never heard of a regression test suite? And you worked in QA? OK. Maybe acknowledging that automated QA (AQA) is an entire field might solve that already-solved problem.
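
For illustration, even a trivial automated check like this (a hypothetical save/load round-trip, written for pytest) re-verifies an old feature on every build with zero manual effort:

```python
# Hypothetical regression check: a document save/load round-trip.
# Once it's in the suite, the old "save" feature gets re-verified on every build.
import json
import tempfile
from pathlib import Path


def save(doc: dict, path: Path) -> None:
    path.write_text(json.dumps(doc))


def load(path: Path) -> dict:
    return json.loads(path.read_text())


def test_save_then_load_round_trip():
    doc = {"title": "Q3 report", "slides": ["intro", "numbers", "summary"]}
    with tempfile.TemporaryDirectory() as tmp:
        target = Path(tmp) / "deck.json"
        save(doc, target)
        assert load(target) == doc  # data survives the round trip
```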

You did a whole lot of complaining and told irrelevant stories, but never answered any of the questions I've been asking you across multiple comments...

[–] Modern_medicine_isnt@lemmy.world 1 points 4 months ago (1 children)

What goalpost have I moved? My initial comment could have said "work well for the user," but the second sentence implied that pretty clearly, and I am still saying it now. And great for you. You probably drank the Kool-Aid to get that position, so you feel the need to carry water for the illusion that upper management always tries to project. I mean, you might be the exception and truly believe the things you say. Maybe you even work for one of the rare companies where it is true. But the vast majority of people working in the field that I have talked to have said that just isn't how it is most places. Many said it used to be, when their company was small... but that it changed.

And yes, I wrote regression tests, and I worked hard to maintain them while writing tests for new features. But with a 5-to-1 ratio of devs to QA, it wasn't possible not to cut corners. A year after I changed jobs, I found out they had lowered the bar for releasing to 55% of the regression tests passing. I never had the tools to make those tests resilient to change, because no one owned the automation tooling, and the next guy just didn't care as much. The job I moved to was QA automation, so the QAs were my customers. I did my best there to give them automation that would reduce maintenance costs, but we weren't allowed to buy anything; we had to write it all, and back then open source wasn't what it is today. So the story was the same: cut corners on testing. And of course the age-old quote... "why is QA slowing down our release process?" Not "why are the devs writing poor code?" The devs weren't bad either, but they were pressed to get features out fast.

As for why they invest in it at all: optics is a big part of it, but also to help maintain that low bar you spoke of. The moment industry trends started touting the Swiss Army knife developer who could do it all, including testing, they dropped QA teams like a bad habit. Presentations were given on how too much testing was bad and fewer tests were better... that pendulum swings back and forth every decade or so. Because quality drops below the low bar, the same exec who got a promotion for getting rid of the QA team at his last job 7 years ago gets accolades for bringing it back at his new job.

[–] Lightor@lemmy.world 1 points 4 months ago (1 children)

The goalpost being moved is what "not working well" means. If something is not working well, I would consider it not working as designed; otherwise I would just call it poorly designed. And if something doesn't work as designed, then things like outages and data issues are a problem.

"You probably drank the kool-aid to get that position, so you feel the need to claim carry water for the illusion that upper management always try to project. "

Sure. Or I worked my ass off to get there and learned things along the way that you are not aware of, things I have pointed out throughout our conversation.

"And yes I wrote regression tests. And I worked hard to maintain them while writing tests on features. But with a 5 to 1 ratio of devs to QA, it wasn’t possible to not cut corners. "

It is very possible. The standard is one QA per 3-5 engineers, and has been for a while; look up what the ratio should be. I don't know what you're expecting in order to keep up, a ratio of 1:2? But this is very common. We have a ratio of 1:4 with about 40 devs, and my QA team keeps up without issue. If you are seeing issues, it's usually due to a poor process or a lack of skill on the team.

"And of course the age old quote… “why is QA slowing down our release process”."

You must work for a backward company. I've worked for about a dozen tech companies, and not once, after explaining the need for QA, did they ever say that. You explain what a Sev1 incident is or how a hack can impact the company, and smart people listen. You may have worked for bad companies that put this taste in your mouth, but I have worked in some of the largest tech hubs (Bay Area, NY, SLC) and this is a huge exception, not the rule.

"The moment industry trends started touting the Swiss army knife developer who could do it all including testing, they dropped qa teams like a bad habit. "

What, when was this? Agile development is pretty much the standard, with Scrum-waterfall hybrids a distant second. In both cases you have dedicated QA, with devs possibly writing unit tests. But it is a massive antipattern to have a dev be the only one QAing their own work; it always has been and always will be.

"Presentations were given on how too much testing was bad, and less tests were better… that pendulum swings back and forth every decade or so. "

Yeah, except that pendulum swinging can cause events that tank entire companies. Any company worth its salt would never fall into that trap, because they know it could burn their investment to the ground in a heartbeat.

[–] Modern_medicine_isnt@lemmy.world 1 points 4 months ago (1 children)

The shift was around 2018 or so. It was talked about on QA forums and at conferences. It was likely talked about at the conferences management goes to as well, but I can't confirm that. It seems like you must work in a specialized industry. 1 QA to 4 devs is about the standard, and they keep up by cutting corners. The effort required to create test automation for a feature is on par with the effort to create the feature, and then you have to add in old tests that need to be maintained. No way one person can cover 4 and not cut corners.

The company that takes the risks gets the product out before those that don't, and the ones that get lucky and don't have a major thing tank them win in the end. That is just how the system works.

[–] Lightor@lemmy.world 1 points 4 months ago (2 children)

No no no. They do not set the ratio like that knowing they will purposefully do a bad job. Anyone with half a brain can explain to a board why that's a bad idea and the risk around it. Do you have any proof at all that companies knowingly build up a department that will cost them a bunch of money and only prevent issues sometimes? No, that's absurd. I have seen firsthand, as a developer, a QA resource cover up to 5 people without issues. I have managed teams, and still do, that have a 1:4 ratio and keep up. It can be done and has been; just because you haven't experienced it doesn't make it impossible.

Yes, this company takes some risk, like in a movie, beating the others to market. And what if their solution causes massive issues because QA, cutting corners, didn't catch some critical bugs? They forever lose that entire market. The company could be out of business instead of still being able to compete in that space. No investor would risk money like that. Where are you getting these ideas from?

[–] Modern_medicine_isnt@lemmy.world 1 points 4 months ago (1 children)

Simple: I've been a QA, worked with QAs, and been to conferences with QAs. We tell the boss we can't cover the whole thing; they say just cover the most important stuff. The general advice from veteran QAs is to not even say that to the boss; they know, and they can't get more resources. So veteran QAs advise others to get a feel for how much time you can spend on a thing before people start complaining that you are holding it up, then work within that timeframe. As long as nothing major gets through, it's all good. Your view is one of survivor bias: nothing big got through, but that doesn't mean it was completely tested. It's good enough, until it isn't. Side note: I've seen product managers close bugs not because they weren't bugs, but because they weren't bad enough compared to features they thought would sell more software. That was an outlier; usually they just wait a few years and mass-close everything that is X years old. That, I have personally seen everywhere I have worked.

[–] Lightor@lemmy.world 1 points 4 months ago* (last edited 4 months ago)

My view is survivor bias, sure.

And your view is based on anecdotes and assumptions, things you've seen that you assume apply to the entire industry because you've worked in QA. Well, I've worked at multiple companies, in roles from product to engineering, working my way up to CTO. I talk to other CTOs and understand how their teams run and fail. I have to make decisions that keep our tech going, and deal with the consequences when it doesn't. So forgive me if I don't put a ton of stock in your statement that "quality doesn't matter" when I've had multiple conversations with executives, and multiple experiences, that prove that to be false.

Bottom line is, I've told you my point of view and you disagree; that's fine. You don't work for me, so I don't need to worry about it. If you truly think quality doesn't matter and that's working for you, have at it.

[–] Modern_medicine_isnt@lemmy.world 1 points 4 months ago (1 children)

https://www.abc.net.au/news/2024-07-19/technology-shutdown-abc-media-banks-institutions/104119960. They didn't even test the update before pushing it. And the company will still exist in a month. They will take a stock hit, might lose some customers, but in the end it will just be a blip.

[–] Lightor@lemmy.world 1 points 4 months ago* (last edited 4 months ago) (1 children)

Jesus dude, you're still on this? I wrote this convo off forever ago.

This will destroy CrowdStrike. They will not exist in a year. This is not "just a blip" lol. Many companies have collapsed over much less than this.

You make so many assumptions. How do you know they didn't test it? How do you know the wrong build didn't go out? Your entire stance is based on assumptions fed by anecdotes from limited experience.

[–] Modern_medicine_isnt@lemmy.world 1 points 4 months ago

Remember SolarWinds... that was supposed to destroy them too. Still here though.
You seemed surprised companies would risk a catastrophic bug because it would destroy them, but this will be the evidence that it won't. And yes, the most likely cause was that the copy process failed somewhere along the line. But checking the hashes of the update you ship against the hashes of what you actually tested is part of QA, and clearly testing didn't happen somewhere critical.
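
Something as simple as this would catch a shipped artifact that isn't the one QA actually tested (the file names here are made up, just to show the idea):

```python
# Sketch: confirm the artifact being released is byte-for-byte the one QA tested.
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


tested = sha256_of(Path("build/update_tested.bin"))      # hash recorded when QA signed off
releasing = sha256_of(Path("build/update_release.bin"))  # hash of what's about to ship
assert tested == releasing, "release artifact does not match the tested artifact"
```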