100% noob question : what's the use for these decompilations ?
If you ask the makers of such things, the answer is "for intellectual curiosity about the ways the games work". For everybody else:
You have source code for the games that will, in the case of a lot of these N64 ones, be commented, with nice variable names, split into files roughly by function.
This allows:
People to compile versions for the original console.
Being in a high-level language, this can mean quite radical changes to the game that would take a conventional ROM hacker considerable skill (which not all have -- learning assembly coding, which is what you will need, is the final boss of learning ROM hacking) and time; now they can be done by people with far less skill, as C is still taught in a lot of places and it is not such a great leap from higher-level languages. Some things may even be changed that would be not impossible but effectively so even for said skilled ROM hacker*.
Better than that, with the comments, variable names, file names and such, a comparatively unskilled person who has followed a guide to get the compiler installed and set up can read the code, think "this gravity variable says 3, what if I put 6 in there instead", and press compile. Depending upon the game you get either Jupiter gravity or a moon-jump mod for the price of changing a text file and recompiling, which is a not inconsiderable hack outside of games with a double jump/jump in the air (there you can make a cheat to hold the "has double jumped" flag as not-jumped and thus into the sky you go, though that is subtly different behaviour to how a moon-gravity mod would likely play out).
As you have the code there, it will tell you how the game handles certain data types -- a decompilation won't inherently give you a level format specification (unless the makers of said decompilation were nice and wrote one) but it will give you the next best thing.
When a PC port gets made, or the code gets made more portable (there will be a fair few conventions in the baseline code that only make sense if you are operating within the N64's limitations -- if you are presumably not going to get a quick PC port, or a port to the PS1, you can lean into those rather than writing the generic case for controls, graphics, music pipelines and whatnot), then it is off to the races.
Draw distance, and possibly fog, used on the N64 version? You are on a modern PC, my friend, which could render the whole level 10 times over on top of playing the latest system grinder of a game without it making a difference. Gone/at effectively infinity.
Widescreen. Yeah, it is a simple camera tweak, rare to see on baseline consoles (the GameCube stuff running on Wii being one of the main exceptions), but fine; said kid doing a gravity hack could probably pull off the basic version too.
High resolution. Of course, simple tweak to the render pipeline and maybe overlays done as graphical effects.
Filtering (anti aliasing, shadows... the sky is the limit, or indeed the sky might even be changed considerably if the baseline texture is found/AI upscaled**). Fill yer boots.
Frame rates. You are on a modern PC, remember; you might even have to tame things down that were coded assuming the baseline 20 fps, or 60 at most, that the original console had.
I like to edit levels but find the size restrictions to fit in N64 memory awfully troubling. You have 32 gigs of RAM, what are you on about -- those endless stairs in Mario 64 might not be a coding trick with Shepard tones as an amusing extra but a model thousands of units longer than the biggest levels, or something actually more interesting.
Same for enemy counts. Could also alter the AI to be something radically more complex, possibly even one of those GPT things for towns NPC dialogue....
Music hacking can be more involved on baseline games, who cares make it play a 7.1 audio file of your choice. Same if you want a fancy cut scene to play a 4k HEVC video.
Co-op modes were mentioned above. Depending upon the gameplay style you might be able to do a Sonic-and-Tails approach where one player is teleported back to the main one, keeping a single-screen render; or do it over a network/cross-talk emulator such that the host game basically sees another NPC (the client game getting copies of the game world). That obviously precludes real hardware, or emulators that are not specially jigged, save maybe for serious network hacks if there is network hardware available (don't think there was for the N64; the GameCube broadband adapter might fare better).
Replacing AI controlled characters with players and firing data around is also how things like 8 player mario kart are accomplished. Not a problem rendering 8 windows here, other than maybe lack of screen to display it on so guess maybe code it for a network instead.
This could go on for a while but hopefully you can see the potential. Oh yeah, and this applies to anything that might run the game natively, instead of hoping your system has the power (and battery life) to deal with an emulator -- which tradition pegs at 10x the power of the original system if it is not similar enough to shortcut (nothing is similar to the N64, certainly not anything modern), though dynarec (more on that below) drops that a bit.
*so the other day I watched
In it he details some of the various aspects of the 3D models (way more involved than the basic X-Y-Z coordinates of a naive 3D model, as the devs sought to wring every last drop of performance from things and hardware manufacturers took shortcuts that work), but also, in the end, the limitations of having to match original model counts. Assuming you are not running into hardware limits, that is a trivial change for a vaguely good coder (an evening/weekend project the first time, probably down to an hour at most thereafter), but a years-long, possibly nightmare project that will struggle to be compatible with other hacks for a very skilled ROM hacker (of which there are possibly 20 in the world that know the PowerPC of the GameCube, bumping to maybe 100 if you took those that could cross-skill, a few more still for those that might learn from scratch off their own back -- no way that gets taught at universities, and indeed such an education often makes life harder).
**choice video at times like these
How come it's always N64 games we see decompiled? No PlayStation games? Or Saturn?
I assume if that gen takes 3 years to decompile then PS2/cube/Xbox would be like 5+?
Anything 16-bit or older is going to be assembly, give or take BASIC for those devices based on it and other interpreted languages you might see (usually running in interpreter recreations, ScummVM being the famous one; see stuff like ZZT), and assembly is not something you decompile as such. Instead you will see disassemblies made in a similar fashion to most of these: with comments, sensible file splits and nice variable names as opposed to raw memory locations.
These disassemblies are far less versatile compared to what I was on about above.
Assembly code is a pain to deal with (you have to be a very good coder, it makes porting things between radically different systems -- as many were -- a nightmare, it makes changing code annoying, it likely means your graphics and music people have to know coding...), so when it became viable to ditch it, the PS1 onwards making that a massive selling point, people did.
C was usually the first thing to go to (in some circles it being dubbed portable assembler but we will skip that one for today).
Trouble is, C has its limitations, especially as it pertains to games (see object-oriented programming; if you understand that already, consider writing an FPS in procedural conventions vs object-oriented ones), so it was in turn ditched a generation later in most instances (legacy code being the main exception, if someone was making a port of an older game) in favour of, usually, C++.
We are now going to have to make a quick aside into the magic of decompilation. Classical computing holds that decompilation is impossible for complicated code: if you learn computer science then one of the basics you might be taught is the halting problem (along with Mr Turing's other great contributions), which roughly states that if the code is waiting on some condition to arise, you generally cannot start at one end of it and end up at the other simply by running through it as a flowchart when you hit the various choices. As games are potentially literally defined as a series of interesting choices, this halting upon encountering a choice becomes a real problem.
However, the reality is that most choices are actually boring. In, say, a Mario platformer I might fall in a pit and die, might lose a life to a poison mushroom, might run out of time, might be squashed, might be hit by an enemy... but in every one of those cases you exit the interesting code and return to the start (or some kind of start anyway), so most choices are in fact boring and instead choose life. Do this enough and you can see most of the game, most of the functions.
Further to this is dynamic recompilation, aka dynarec in many emulation circles. It still sits a bit aside from the static decompilation that a lot of these projects do. The reasoning (right, as it turned out) was: if the baseline code was C, then rather than imagining a whole complicated N64 CPU and associated hardware, you could figure out what the equivalent host code was (most things are basic operations: add this to this, multiply, divide) and turn it into code the host system could run directly -- and you have a lot of info on the running program if you are emulating it. This host code is then much, much, much faster than emulating said CPU; do that, move to the next thing, and see if you can figure that out too (which, oh look at that, is another recognisable piece of code). Hence why you had N64 emulation on PCs that were nowhere near the 10x the power of the emulated system that classical emulation holds you kind of need to do anything sensible (sensible also being up for debate if you dare broach the SNES9x vs ZSNES flame wars), or far faster still if you want to be down with the transistors (see also why FPGAs are popular right now for high-end emulation and device recreation).
https://arstechnica.com/gaming/2011...-3ghz-quest-to-build-a-perfect-snes-emulator/ for a minor jumping off point, have a few others in the intro to
https://gbatemp.net/review/analogue-pocket-gb-gbc-and-gba-handheld-fpga-based-player.2081/ if you want my usual list.
https://gbatemp.net/threads/can-i-aka-you-as-i-cant-wont-code-port-this-game-to-this-device.576997/ being another with some other things and further discussions.
Static decompilation also rose up the ranks of research projects into something more practical. It was noted along the way that there are actually only a handful of compilers anybody uses (even more so for embedded systems like the consoles), and a few standard libraries on top of that. Fire enough code through them -- for the coders among you: it is easy to make a simple grep-sed-awk type approach that generates examples of every number and data type having every mathematical operation, compare and whatnot done on it, plus the standard library and any leaked/released libraries you might have found, to train the machine with -- and you have a fair corpus both to compare against and to train an AI/play machine learning on, recognising the patterns. Certainly enough to pick out some constructions and leave humans to figure out a rather smaller, or more nuanced, problem.
Said problem is where the next limitation comes in. With procedural C, the problem space (as the mathematicians and the computer science side of them would have it) is reasonably achievable with high-end modern computing: you are following along with the fairly straightforward "run through this until" (WHILE), "for this list" (FOR), "if this, else that" (IF/ELSE). With object-oriented code the problem space multiplies massively, as you now have many different things all speaking to one thing, branching off every which way depending upon all manner of more unpredictable things. Or, an analogy: C decompilation is like playing a choose-your-own-adventure keeping fingers on earlier pages to roll back; C++ is more like playing a game while taking a savestate every frame to roll back to, and keeping track of all that.
Anything newer is going to be a leak; an official release of said code (happens from time to time, though in the days of Steam long-tail sales -- for PC games that would once have dropped off, or never even made it to those spinning racks of cheap games of old -- this has changed the nature of things: most only do it when the game is dead, or when hoping the community ports/modernises the code such that the original can be bought for the art assets but nobody has to pay expensive devs); possibly a retracted official release (have been a few of those, usually because they used the RAD/Bink video format and that caused troubles); something written in a still higher-level language that is more amenable to decompilation (Java, C#/.NET, Lua, Python...); or the more traditional outright recreation from gameplay observations (and possibly limited code prodding, depending upon how much the makers of such things feel like bruising the law), or porting between engines that might be better known (closed-source Unreal on console to Unreal with released code on PC, sort of thing).
Why N64? For now it does still involve a lot of manual work (can't just grab
https://hex-rays.com/decompiler/ or
https://github.com/NationalSecurityAgency/ghidra and expect something useful to come out of it), and it seems the N64 has attracted those with the skills***. The PS1 version of Diablo did help the PC decompilation effort, but that was more of an incidental info transfer than an outright decompilation of it. Plenty of PC games are also seeing decompilation efforts, but those get reported on around here rather less, not to mention most of those seem to be going for playable (possibly with original assets) rather than the somewhat bizarre fixation on 1:1 matching that the N64 scene seems to be experiencing.
Also while I said a generation later then I would note C was being used for PC games rather before it became viable on consoles (
https://en.wikipedia.org/wiki/ANSI_C has some of the timelines involved, the 89 part of C89 referring to the year. By the way, if you know the original/draft C, sometimes called K&R C, it is quite different and actually quite the lucrative skill.) so PC games will see C arise earlier and dip out for C++ earlier still (rule of thumb probably being minus a generation from the console equivalents for both jumps, though there are plenty of exceptions and quirks, like inline assembly and all the weird optimisation and packing formats, making the PC world that much more annoying -- though equally DLL labels are lovely things.
https://www.dependencywalker.com/ if you wanted a go on something there.) and handhelds probably one or even two the other way (the GBA and DS being C with inline assembly, PSP bit more C++ probably, GBC being assembly with some helpers
https://www.pagetable.com/?p=28 , not sure what goes for the 3ds but probably some C++ by that point but I did also see fully written in assembly claims).
***I have not looked into the backgrounds of those doing the deed for the N64 stuff (a lot of it being somewhat anonymous as well, possibly for fear of legal issues, both from Nintendo and for future work having such things attributed to them), but if they are also those who worked on the emulators and plugins in years past then they are probably versed enough in such things to accelerate the process considerably. This is possibly in addition to the nostalgia factor -- while the N64 was a commercial failure it does have some quite iconic games (most of which you will see being decompiled) that people still wax nostalgic about, where fewer do for the PS1.
Equally there is a need to train the decompilers on things, which actually makes the N64 stuff even more of an aberration: obscure embedded CPUs will lose out to the far sexier PC and Windows x86 world.
https://osgameclones.com/ https://en.wikipedia.org/wiki/List_of_commercial_video_games_with_available_source_code for some links to possibly be a jumping off point on the PC side of things. A lot of PC stuff is also tied up in mod making, with decompilation helping more to direct hacks and understand formats rather than outright recreation.