Dude looks like Dexter DeShawn from Cyberpunk.
I have a spreadsheet with items, their price, and the quantity bought. I want to include a discount with multiple tiers, based on how many items have been bought, and have a small table where I can define quantities and the discount that applies to each. Which Excel functions should I use?
Response:
You can achieve this in Excel using the VLOOKUP or INDEX-MATCH functions, optionally combined with IF.
1. Create a small table with the quantity thresholds and their corresponding discounts, sorted by quantity in ascending order.
2. Use VLOOKUP (with approximate match) or INDEX-MATCH to look up the discount tier that applies to the quantity in your main table.
3. Use IF if you need to handle edge cases, such as quantities below the lowest tier.
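A minimal sketch, assuming the tier table sits in E2:F5 (minimum quantity in column E, discount in column F, sorted ascending) and the purchased quantity is in B2:

```
=VLOOKUP(B2, $E$2:$F$5, 2, TRUE)
```

With the last argument set to TRUE (approximate match), VLOOKUP returns the discount for the largest threshold that is less than or equal to B2, which is exactly the tiered behavior you want. The INDEX-MATCH equivalent would be =INDEX($F$2:$F$5, MATCH(B2, $E$2:$E$5, 1)).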
I mean, the GPL guarantees code remains open and free. If they release an app based on source code licensed under the GPL, they have to provide the source code, along with essential build instructions, to anyone using the app, and then you can do anything with that code, including sharing, compiling, and distributing the app, provided it stays under the GPL.
Edit: I see it’s licensed under GPL 3.0, so no worries.
General rule of thumb: comments should say why the code is there, not what it does. The code itself should describe what it does.
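A tiny illustration of the difference (the retry scenario is made up):

```java
public class RetryExample {
    public static void main(String[] args) {
        int retries = 0;

        // Bad comment: restates what the code does.
        // Increment retries by one.
        retries++;

        // Good comment: explains why the code is here.
        // The upstream API sometimes returns 503 on cold starts,
        // so we retry a few times before treating it as a hard failure.
        retries++;

        System.out.println("retries = " + retries);
    }
}
```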
Totally agree on that. When the first-generation RTX cards launched, I was pretty sure it was just going to be another gimmick like PhysX, but today it's clearly the inevitable future.
I might be wrong, but even for games like Cyberpunk 2077 there is a finite set of world states that define lighting conditions (time of day, weather, etc.).
So prebaking lighting information for all those combinations and then figuring out a way to transition between them would maybe not be a perfect representation, but it would be the best of both worlds.
However, given how fast ray-tracing hardware is improving, in my opinion it would make no sense to even consider researching and developing a solution like that.
Real-time RT really is meh, but I like what they're doing in CS2. Prebaked global illumination looks freaking fantastic.
The difference between generations of USB-A is speed. If a user notices a difference in speed, they're much more likely to already know the difference between USB versions.
The difference between USB-C and USB-A is capabilities. USB-C is already confusing for many people. My boss (an IT project manager) thought he could use USB-C to connect his monitor, but he couldn't, because his laptop doesn't support DisplayPort over USB-C.
There is already a huge mess with USB-C capabilities. Some ports are just glorified USB-A, some have DisplayPort over USB-C, some are Thunderbolt (with different versions, of course), and some are QC (with different versions, once again).
I can just imagine the confusion of users who expect all of the USB-C ports on a motherboard to work the same way, and then only one or two out of eight ports have DisplayPort capabilities.
“If it doesn't fit, it's not supposed to go there” is a great way to tell the user what capabilities a port has.
I disagree.
More technical people would understand, but your average Joe would try to plug in their external monitor and RMA the PC because it's “not working”, and the same goes for slow phone charging, etc.
I'm honestly all for keeping USB-A for basic I/O devices. Although inventing a female USB-A connector that works both ways and is backwards compatible would be neat.
English is not my first language. When I write something down in my first language (Polish), it feels more like I'm transcribing things I silently say to myself, while with English I'm actually thinking about every word I type.
The funny thing is, the better I get at English, the easier it becomes for me to make those kinds of mistakes.
But idk, this is just my experience.
Because they learned it from hearing, not reading, so that makes sense.
Apps still need to request specific permissions from the OS before they can use things like GPS, mobile data, the filesystem, etc.
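On Android, for example, that flow looks roughly like this (a minimal sketch using the androidx compat helpers; the activity name and request code are made up):

```java
import android.Manifest;
import android.content.pm.PackageManager;
import android.os.Bundle;
import androidx.appcompat.app.AppCompatActivity;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;

public class MapActivity extends AppCompatActivity {
    private static final int REQUEST_LOCATION = 1; // arbitrary app-defined request code

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // The OS only grants location access if the user agrees;
        // calling GPS APIs without the permission throws a SecurityException.
        if (ContextCompat.checkSelfPermission(this,
                Manifest.permission.ACCESS_FINE_LOCATION)
                != PackageManager.PERMISSION_GRANTED) {
            ActivityCompat.requestPermissions(this,
                    new String[]{Manifest.permission.ACCESS_FINE_LOCATION},
                    REQUEST_LOCATION);
        }
    }
}
```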
But the point you're missing is that unless you build everything yourself, there is always some party you have to trust. Apple likes to paint itself as trustworthy when it comes to your data, but all the anti-consumer shenanigans they pull with hardware make it clear the only thing they care about is money.
Remember: it's either convenience with a false sense of security, or actual security. Never both.
Um… what?
If you drive a car, it needs a license plate, and that plate is tied to you. If you commit a crime, you're likely to get reported. You can also be randomly stopped by the police, and they will check whether you have a driver's license.
For firearms, as far as I know, the ammunition has some sort of serial number which, in the case of a crime, would let the police track you down by contacting the people who sold it to you.
With a printer, how the fuck is that going to change anything? Not to mention you can quite easily build one yourself.
You just need to realize that Adobe doesn't release their stuff on Linux not because Linux doesn't allow them to, but because the Linux desktop market share is too small.
It's a chicken-and-egg problem. If Adobe released their stuff, there would magically be a massive movement to improve HDR support, color accuracy, etc.
And you need to realize that Microsoft achieved such a giant market share thanks to illegal monopolistic practices in the '90s, which still have a huge impact today.
I can't find any reason why someone would still use RAR in 2023. When I see anyone using it, I assume they're about as technologically literate as my grandpa.
I already had huge respect for him for being really pragmatic, but after this post my respect climbed to a level I didn't even think was possible.
And that's without even considering that the guy basically made the software that runs >99% of the modern internet.
He also voiced Vesemir in The Witcher, and he was a very popular voice actor in localized animated movies. He was genuinely one of the few voice actors I knew by name, and the news of his death really struck me.
OK, if I remember correctly, YouTube barely generates revenue for Google, but it generates revenue nonetheless. There are many ways to make more money or cut costs without fucking over users:
- downgrade old videos with low view counts to 720p30
- make people pay for hosting >1080p60 content
- stop allowing private/unlisted videos
- straight up remove 10-hour looped videos - they are technically spam and cost a lot in both bandwidth and storage
And my go-to solution: focus on sponsorships as the main source of revenue. They are the only ads I can tolerate, and in my experience they're actually effective. YouTube could just take a cut of every sponsorship in a YouTube video and everyone would be happy.
Chromium has tons of eyes on it, because it's the codebase for many other projects, such as Electron and every Chromium-based browser.
Web Environment Integrity wasn't discovered through the Chromium source code; it was openly proposed by Google in a separate GitHub repo dedicated solely to that proposal.
There's a shortcut in your thinking: that code being open by itself makes it trustworthy. Every piece of PowerShell malware technically has its code open, because it's a script. But you wouldn't run a random script from the internet without checking what it does, yet you don't apply the same logic to Brave. If you don't check the source code yourself, you have to trust either the author or the third parties that “checked” the code.
On top of that, you're probably using a compiled binary, at which point you can throw the source code out the window, because you can't be sure the compiled binary actually matches the source code.
With such an enormous amount of code, it's really easy to obfuscate malicious behavior. At the scale of a browser, it's more efficient to monitor the outbound packets the program sends than to examine the source code.
Java used to lack many features needed to do the stuff you wanted it to do, so most Java programmers adopted design patterns to work around those gaps.
Honestly, the developer experience of older Java versions is utter garbage. The only reason it got so popular was aggressive enterprise marketing, and it worked. How can a language lack a feature as essential as default parameters?
Anyway, after the great hype, Java lost market share and developers were forced to learn other technologies. And of course, instead of looking for language-native ways of solving problems, they just reused the same design patterns.
And thus MoveAdapterStrategyFactoryFactories ended up in places where a simple lambda function would do the same thing, just without being abstracted away three layers up. And each one, obviously, used exactly once in the entire codebase.
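A toy illustration of the difference (the names are made up to echo the joke):

```java
import java.util.function.UnaryOperator;

public class MoveDemo {
    // The "pattern" version: an interface, an implementation, and a factory,
    // three layers of abstraction wrapped around one line of logic.
    interface MoveStrategy { int move(int position); }

    static class StepRightStrategy implements MoveStrategy {
        public int move(int position) { return position + 1; }
    }

    static class MoveStrategyFactory {
        static MoveStrategy create() { return new StepRightStrategy(); }
    }

    public static void main(String[] args) {
        MoveStrategy strategy = MoveStrategyFactory.create();
        System.out.println(strategy.move(41)); // 42

        // The lambda version (Java 8+): same behavior, zero ceremony.
        UnaryOperator<Integer> move = position -> position + 1;
        System.out.println(move.apply(41)); // 42
    }
}
```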
IMO the only really good thing about Java was the JVM; while not perfect, it actually delivered what it promised.