LSD: Dream Emulator must also rank up there, surely? :)
Gorogoa has been on my “pile of shame” for several years now. Perhaps today’s the day.
The morality system was a huge disappointment for me. You said most of what I wanted to say, so I’ll be brief.
Right near the start of the game, an NPC outlines the Way of the Open Palm vs. the Way of the Closed Fist, more or less the same way you described them. And I was so excited to see a morality system in which both sides were morally defensible positions. But from the very first Closed Fist follower you meet (just minutes later), they may as well all be monocle-wearing moustache-twirlers who punctuate every sentence with “mwah-ha-ha!”
The worst example that I remember is a bootlegger who’s essentially holding a town hostage. Far from following either philosophy as described, he’s just plain evil, and in fact I easily came up with (IMO solid) arguments for actually swapping the game’s morality labels on the player’s options. But no, one option is clearly “evil”, so that’s Closed Fist, while the other is obviously “good”, hence Open Palm.
Whoops! When I looked at the second place where the shift value is calculated, I wondered if it would be inverted from the first, but for some reason I decided that it wouldn’t be. Looking at it again, it’s clear now that (1 - i) = (-i + 1) = ((~i + 1) + 1) = (~i + 2), and since adding 2 never touches bit 0, bit 0 of (1 - i) is the inverse of bit 0 of i. Then I wondered why there wasn’t more corruption and realized that the author’s compiler must perform postfix increments and decrements immediately after the variable is used, so the initial shift is also inverted. That’s why the character pairs are flipped, but they still decode correctly otherwise. I hope this version works better:
long main () {
    char output;
    unsigned char shift;
    long temp;
    if (i < 152) {
        shift = (~i & 1) * 7;
        temp = b[i >> 1] >> shift;
        i++;
        output = (char)(64 & temp);
        output += (char)((n >> (temp & 63)) & main());
        printf("%c", output);
    }
    return 63;
}
EDIT: I just got a chance to compile it and it does work.
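For anyone following along, here’s a quick standalone check (my own throwaway test program, not part of the decoder above) showing that bit 0 of (1 - i) really is the inverse of bit 0 of i, i.e. the same as (~i & 1):

#include <stdio.h>

/* Sanity check: bit 0 of (1 - i) equals bit 0 of ~i, because in
 * two's complement 1 - i = ~i + 2, and adding 2 never changes bit 0. */
int main(void) {
    for (int i = 0; i < 8; i++) {
        printf("i=%d  (1-i)&1=%d  (~i)&1=%d\n", i, (1 - i) & 1, (~i) & 1);
    }
    return 0;
}

The two columns come out identical for every i, which is the bit-0 inversion I was describing.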
I first learned about Java in the late 90s and it sounded fantastic. “Write once, run anywhere!” Great!
After I got past “Hello world!” and other simple text output tutorials, things took a turn for the worse. It seemed like if you wanted to do just about anything beyond producing text output with compile-time data (e.g. graphics, sound, file access), you needed to figure out what platform and which edition/version of Java your program was being run on, so you could import the right libraries and call the right functions with the right parameters. I guess that technically this was still “write once, run anywhere”.
After that, I learned just enough Java to squeak past a university project that required it, then promptly forgot all of it.
I feel like Sun was trying to hit multiple moving targets at the same time, and failing to land a solid hit on any of them. They were laser-focused on portable binaries, but without standardized storage or multimedia APIs at a time when even low-powered devices were starting to come with those capabilities. I presume that things are better now, but I’ve never been tempted to have another look. Even just trying to get my machines set up to run other people’s Java programs has been enough to keep me away.
I don’t know if this will work or even compile, but I feel like I’m pretty close.
long main () {
    char output;
    unsigned char shift;
    long temp;
    if (i < 152) {
        shift = (i & 1) * 7;
        temp = b[i >> 1] >> shift;
        i++;
        output = (char)(64 & temp);
        output += (char)((n >> (temp & 63)) & main());
        printf("%c", output);
    }
    return 63;
}
It’s not a really big thing, but it is a pet peeve of mine (and some others); the name of the series isn’t “Dues Ex” but “Deus Ex” (day-us ex), as in “deus ex machina” (day-us ex mack-in-a).
“Deus ex machina” literally translates as “God from (the) machine”, and originally referred to a type of stage prop used in ancient plays. In more modern times, the term came to refer more generally to the sort of plot device that used that prop: a previously unmentioned person or thing that suddenly appears to save the heroes from an otherwise inescapable threat. At some point in the 60s or 70s, it started to become popular to use it in a more literal sense in sci-fi stories about machine intelligence or cyborgs.
I taught myself programming as a kid in the 80s and 90s, and just got used to diagnostic print statements because it was the first thing that occurred to me and I had no (advanced) books, mentors, teachers, or Internet to tell me any different.
Then in university one of my lecturers insisted that diagnostic prints are completely unreliable and that we must always use a debugger. He may have overstated the case, but I saw that he had a point when I started working on the university’s time-sharing mainframe systems and found my work constantly being preempted and moved around in memory in the middle of critical sections. Diagnostic prints would disappear, or worse, appear where, in theory, they shouldn’t be able to, and they would come and go like a restless summer breeze. But for as much as that lecturer banged on about debuggers, he hardly taught us anything about how to use them, and they confused the hell out of me, so I made it through the rest of my degree without using debuggers except for one part of one subject (the “learn about debuggers” part).
Over 20 years later, after a little professional work and a lot of personal projects and making things for other non-coding jobs I’ve had, I still haven’t really used debuggers much. But lately I’ve been forcing myself to use them sometimes, partly to help me pick apart quirks in external libraries that I’m linking, and partly because I’d like to start using superscalar instructions and threading in my programs, and I remember how that sort of thing screwed up my diagnostic prints in university.
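In case it helps anyone picture what I mean, here’s a rough little sketch (plain pthreads, everything made up purely for illustration; build with -pthread) of how diagnostic prints can mislead once preemption or threads are involved: a thread can be interrupted between doing the work and printing about it, so the output order doesn’t have to match the order things actually happened, and the racy counter can even lose updates.

#include <pthread.h>
#include <stdio.h>

/* Two threads bump an unsynchronized counter and print what they saw.
 * The lines can appear out of order relative to the actual increments,
 * and increments can be lost entirely, so the "diagnostic" output can
 * paint a misleading picture of what the program really did. */
static long counter = 0;

static void *worker(void *arg) {
    const char *name = arg;
    for (int n = 0; n < 5; n++) {
        long seen = ++counter;    /* deliberately racy update */
        printf("%s saw counter = %ld\n", name, seen);
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, "thread A");
    pthread_create(&b, NULL, worker, "thread B");
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("final counter = %ld (not necessarily 10)\n", counter);
    return 0;
}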
Of course! There’s already a proposal for a replacement Temporal object.
The definition of the Date object explicitly states that any attempt to set the internal timestamp to a value outside of the maximum range must result in it being set to “NaN”. If there’s an implementation out there that doesn’t do that, then the issue is with that implementation, not the standard.
There are several reasons that people may prefer physical games, but I want people to stop propagating the false relationship of “physical copy = keep forever, digital copy = can be taken away by a publisher’s whim”. Most modern physical copies of games are glorified digital download keys. Sometimes, the games can’t even run without downloading and installing suspiciously large day 0 “patches”. When (not if) those services are shut down, you will no longer be able to play your “physical” game.
Meanwhile GOG, itch, even Steam (to an extent), and other services have shown that you can offer a successful, fully digital download experience without locking the customer into DRM.
I keep local copies of my DRM-free game purchases, just in case something happens to the cloud. As long as they don’t get damaged, those copies will continue to install and run on any compatible computer until the heat death of the universe, Internet connection or no, just like an old PS1 game disc. So it is possible to have the convenience of digital downloads paired with the permanence that physical copies used to provide. It’s not an either-or choice at all, and I’m sick of hearing people saying that it is.
It really depends on your expectations. Once you clarified that you meant parity with current consoles, I understood why you wrote what you did.
I’m almost the exact opposite of the PC princesses who can say with a straight face that running a new AAA release at anything less than high settings at 4K/120fps is “unplayable”. I stopped watching/reading a lot of PC gaming content online because it kept making me feel bad about my system even though I’m very happy with its performance.
Like a lot of patient gamers, I’m also an older gamer, and I grew up with NES, C64, and ancient DOS games. I’m satisfied with medium settings at 1080/60fps, and anything more is gravy to me. I don’t even own a 4K display. I’m happy to play on low settings at 720/30fps if the actual game is good. The parts in my system range from 13 to 5 years old, much of it bought secondhand.
The advantage of this compared to a console is that I can still try to run any PC game on my system, and I might be satisfied with the result; no-one can play a PS5 game on a PS3.
Starfield is the first game to be released that (judging by online performance videos) I consider probably not worth trying to play on my setup. It’ll run, but the performance will be miserable. If I were really keen to play it I might try to put up with that, but fortunately I’m not.
You could build a similar system to mine from secondhand parts for dirt cheap (under US$300, possibly even under US$200) although these days the price/performance sweet spot would be a few years newer.
My Hero isn’t actually Irish, but it does star Ardal O’Hanlon (Father Dougal) for almost its entire run, so it may scratch the itch.
I can’t respond directly because I haven’t played either Metroid Dread or Hollow Knight specifically, although I’ve played and enjoyed many other metroidvania games, including the majority of the Metroid series (I even enjoyed Metroid Other M… mostly). But I’ll say that there’s no rule that says a metroidvania can’t be entertaining before you’ve unlocked some specific part of the ability set. The search to unlock new abilities should be fun in itself.
I think that we mostly agree. My contention is that pretty much the entire game should still be engaging to play; a long total play time shouldn’t excuse boring stretches, and a shorter play time simply doesn’t allow for them. Plenty of games have shown that it’s possible to gradually layer mechanics one or two at a time, creating experiences around those smaller subsets of abilities that are still entertaining. I work in education, and this idea is vital to what I do. Asking students to sit down and listen quietly as I feed them a mountain of boring details while promising, “Soon you’ll know enough to do something interesting, just a little longer,” is a sure-fire recipe for losing my audience.
And as I think you may have intimated, creating environments that require the use of only one ability at a time reduces those abilities to a boring list. When you’ve finally taught the player each ability in isolation, and suddenly start mixing everything up once they get to the “good part” of the game, they’ll virtually have to “relearn” everything anyway.
We don’t need to give the player everything at once to make our games interesting, but we do need to make sure that what we’re giving them piecemeal is interesting in the moment.
This isn’t a slight against you, OP, or this game, but I’m just suddenly struck by the way that, “aside from the first few hours,” or more commonly, “it gets better a couple of hours in,” has become a fairly common and even somewhat acceptable thing to say in support of a game, as part of a recommendation.
As I get older I’m finding that I actually want my games to have a length more akin to a movie or miniseries. If a game hasn’t shown me something worthwhile within an hour or so, I’m probably quitting it and never coming back.
No argument from me; the management was chaos at that place. Those kinds of mistakes were beyond my control, but fortunately they were rare.
We were managing our own work with (usually generous) milestones/deadlines determined by other people. As long as we kept meeting goals, no-one looked any deeper. It gave me the freedom to literally put everything else on hold and switch 100% of my attention to this project.
I think I was kinda in the same boat as you.
In theory, I loved the fact that, if you wanted to check, the game would tell you when you theoretically had enough information to identify one of the crew or passengers, so you knew where to focus your thinking. But I got stuck on some characters whose identities seemed to me to be implied or hinted at, but for whom I didn’t think I had positive proof.
I eventually got tired of continuously reviewing the same scenes over and over, looking for some detail that I had overlooked, and read a walkthrough to find out what I was missing. It seems that I hadn’t missed anything, and “an educated guess” was the standard expected by the game, not “definitive proof”. But I was burnt out with the game by that point and stopped playing.