

We’d better keep an eye out for them in the future.
What ShinkanTrain said. The last I read about it, the PS2 only switches into PS1 mode on a trigger from the optical drive subsystem, and then most of the memory and other hardware used to run homebrew is deactivated. AFAIK no-one’s yet found a way to trigger the change in software while keeping the connection to wherever you’re loading your game from.
I believe that on certain revisions of the console, MechaPwn can overcome the protection, but you still need a “Playstation 1” CD in the drive to actually run something, as ShinkanTrain wrote.
Probably because it’s pretty slow, and the custom drive format used by the PS2 isn’t very flexible: game images have to be in one continuous block, and blocks can’t be moved. You can overwrite one game with another, but only if it’s the same size or smaller. So if you delete games in the reverse of the order you put them on, you’re fine; otherwise you’re going to leave empty “holes” of wasted space.
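To make the “holes” problem concrete, here’s a toy model in C. Illustration only: the names and sizes are made up, and this isn’t the real on-disk format, just the constraint it imposes.

```c
#include <stdio.h>

/* Each game occupies one fixed, contiguous extent that can never move. */
struct extent { const char *name; int start_gb; int len_gb; };

int main(void) {
    struct extent disk[] = {   /* three games installed back to back */
        { "Game A",  0, 10 },
        { "Game B", 10,  4 },
        { "Game C", 14,  7 },
    };
    /* Delete Game B: Game C can't be slid back to close the gap,
       so only a game of 4GB or less will ever fit in this hole. */
    disk[1].name = "(hole)";
    for (int i = 0; i < 3; i++)
        printf("%-7s start=%2dGB len=%2dGB\n",
               disk[i].name, disk[i].start_gb, disk[i].len_gb);
    return 0;
}
```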
Having tried both a USB3 drive adapter and downloading over Ethernet, I’ll say that Ethernet was way slower for me.
The average copy time on the adapter was about 30 minutes, but over Ethernet it took 3-4 times as long.
I have a 2TB SATA HDD in my PS2 fat. AFAIK that’s still the maximum storage size possible with the FMCB/wLaunchELF software. I believe that an unmodded original network adapter should be able to take up to a 512MB IDE drive, but I’d have to double-check that.
I used to use a third-party “network adapter” with SATA support (they usually don’t have Ethernet, just an HDD connector), and it still works fine; most brands seem to have stopped working properly after a certain homebrew software version. Later I bought an official adapter (IDE/PATA) plus a SATA conversion kit so I could try network stuff. (A kit made specifically for the PS2 network adapter, that is, not a generic IDE-SATA converter; those sometimes work with the PS2 and sometimes don’t.)
I don’t think it was worth it, but these days it’s probably the way to go, since there no longer seems to be any way of telling the non-working aftermarket adapters from the working ones; the companies making the bad ones just started putting the brand name of the one still-working adapter on their products.
If you have a relatively powerful computer or phone, and your library only contains games from the console’s top 100 or so, you’re probably right.
You can run games directly over Ethernet, which I believe can run at close to full speed. Some people have made dedicated little server devices for this out of cheap single-board computers like a Raspberry Pi; I think one guy may even have been selling a finished product like this for a while.
And to be really picky, the first version of the slim actually has IDE HDD support onboard like the original network adapter, just no physical connector (you have to solder one on yourself).
I use 10ten (previously Rikaichamp) for Japanese. I don’t think it does full translation, but it gives thorough dictionary lookups (from WWWJDIC) as mouseover tooltips. Very useful if you’re trying to learn the language, but maybe not so much if you just want to read stuff quickly. I think it’s now available for every major browser, but I mostly use it on FF.
Almost everyone in this thread is talking about wannabe tryhards, investment, and reliable watches, but for anyone who’s interested in the pictured watch: they’re real, but pretty cheap and flimsy; I have a couple.
They came out in gashapon machines a couple of years ago, although I got mine just last year for (IIRC) ¥500 a pop. It’s a series by Takara-Tomy with two models of Saturn (black and white), and two models of PS1 (PSX and PSOne).
https://dlmag.com/playstation-1-and-sega-saturn-themed-watches-for-classic-game-fans/
https://www.piggygaga.com/shop/gashapon-sega-saturn-playstation-vs-watch-collection/
A few years ago I had a software problem, and in the course of trying to solve it I found someone with an almost identical problem on SO, although no-one had posted a solution. Later on, I managed to piece some facts together and come up with a solution that worked for me. Trying to make life easier for others having the same problem, I posted my solution to that SO question, along with a brief explanation of what I thought the underlying problem was, and how my solution addressed it.
I got several upvotes, and one or two comments from people saying it worked for them too, which was nice. There was also a post from someone it didn’t work for, and they outlined why they thought that might be, which was constructive.
Unfortunately there was also some salty grump who weighed in just to tell me that my solution wasn’t “correct”. Not that it didn’t work, mind you, just that it wasn’t good enough for them. As far as I bothered to look into their vague comments, my solution may have fixed the issue more as a side-effect than directly, but it did fix the issue. Meanwhile this person offered no alternative instructions of their own.
As time goes on, I seem to run across this sort of – not just unhelpful but “anti-helpful” – attitude more and more often on SO.
I apologize, because between OP’s post and looking at the OnlyOffice website, I got the impression that it was only a web app, requiring a web server to run. After reading another comment here I looked harder at the website and found the download links for the standalone versions.
Where are these conversations happening? I could see a lot of enterprise-focused groups potentially getting behind OnlyOffice, but individual home users? Not so much.
EDIT: My mistake! I didn’t realize that there are standalone versions of OnlyOffice in addition to the web app version.
That’s kind of the bare bones of how it works, underneath all the abstraction layers and pretty GUIs.
Then it evolves.
First, you start splitting your code into multiple source files, either because your programs get too big to keep scrolling up and down one huge file to cross-check things, or because you want to incorporate someone else’s code into your program, and it’s more than just one or two functions you can easily copy and paste. You can still keep compiling and linking all of this in one step, but the command gets so long that you make a shell script/batch file as a shortcut.
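Something like this, say (gcc and the file names here are just examples):

```sh
#!/bin/sh
# build.sh - compile and link everything in one step
gcc -o mygame main.c graphics.c sound.c input.c
```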
After that, you might want to mix-and-match various source files to target different platforms, or to make other bulk changes, and you start going down the rabbit hole of having your shell script take arguments, rather than having a dozen different scripts. And then one day you take another look at “make” and realize that whereas before it seemed like impenetrable overengineering, it now makes complete and obvious sense to you.
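By that stage the script might look something like this (again, hypothetical names):

```sh
#!/bin/sh
# build.sh - pick platform-specific sources via an argument
case "$1" in
  linux) PLAT=platform_linux.c ;;
  win)   PLAT=platform_win.c ;;
  *)     echo "usage: $0 linux|win" >&2; exit 1 ;;
esac
gcc -o mygame main.c graphics.c sound.c "$PLAT"
```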
Then you discover using “make” (or a similar utility) to split compilation and linking into separate steps. That used to seem nonsensical, but now you’re dealing with codebases that take more than a couple of seconds to compile, or with precompiled libraries or DLLs, and you get comfortable with the idea of hanging on to compiled object files and (re)using them whenever the source for that part of the program hasn’t changed.
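A minimal sketch of that idea as a GNU makefile (hypothetical file names again); make rebuilds a .o only when its .c has changed, and the link step just reuses the rest:

```make
OBJS = main.o graphics.o sound.o

# link step: combine the object files into the final program
mygame: $(OBJS)
	gcc -o mygame $(OBJS)

# compile step: one .o per .c, rebuilt only when out of date
%.o: %.c
	gcc -c $< -o $@
```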
And finally (maybe) you look at some of the crazy stuff in fancy IDEs and understand why it’s there: it’s just a representation of all this other stuff that you now know about and feel competent with. I say “maybe” because I’ve been programming for over 35 years, occasionally professionally but mostly as a hobbyist, and there are still things in IDEs that I either don’t understand or don’t see the point of. But knowing the underlying principles makes me feel comfortable enough to ignore them.
I hadn’t heard of Kate before, so I can’t offer much hands-on advice. I dug around and found a “handbook” here: https://docs.kde.org/stable5/en/kate/kate/index.html
Unfortunately it does look like you need to define a project to compile/run anything, which appears to require manually creating a .kateproject file in the directory, as outlined here:
https://docs.kde.org/stable5/en/kate/kate/kate-application-plugin-projects.html#project-create
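Going by that page, it’s a JSON file, something along these lines; I haven’t tested this myself, so treat it as a sketch:

```json
{
    "name": "MyProject",
    "files": [
        { "directory": ".", "filters": ["*.c", "*.h"], "recursive": 1 }
    ]
}
```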
I had exactly the same problem when I moved from languages that were interpreted, or that combined the IDE and runtime environment into one, and started using languages with their own external compiler. Unfortunately, open source project user documentation is often terrible for beginners (what I found above for Kate seems to be no exception), and IDEs often seem to be written by people who don’t really expect anyone to actually use the included build options (to be fair, most folks seem to like using their own separate build utilities, so that’s probably often the case).
If you can tell us which compiler or interpreter you’re using (e.g. gcc, clang, Python), someone can probably tell you how to compile and/or run a single-file program from the terminal with a fairly simple command.
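For example, assuming a C file compiled with gcc, or a Python script (file names made up):

```sh
# C: compile once, then run the result
gcc hello.c -o hello && ./hello

# Python: no separate compile step, just run it
python3 hello.py
```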
In communities for the Murderbot Diaries series of books, I sometimes see this game mentioned as a good fit for the feel of that universe. What I’ve seen in clips of playthroughs bears that out; I bought the game a while ago but haven’t gotten around to actually installing it yet.
Anyway, I just wanted to shout out the Murderbot series as something that folks may be interested in if they enjoyed this game’s world and are looking for something to read.
If you or anyone else is interested in playing more, I recommend:
I played a little of Silent Hill: Homecoming but got tired of it about 1/3 of the way through (I guess). I also bought Silent Hill: Downpour but gave up on that even more quickly. I don’t recommend either of them. Things introduced in the earlier games for specific psychological reasons related to the plot – especially sexy monster nurses and Pyramid Head – tend to be regurgitated in the later games for no real reason other than “Silent Hill”, which removes their impact completely.
I think I was kinda in the same boat as you.
In theory, I loved the fact that if you wanted to check, the game would tell you when you theoretically had enough information to identify one of the crew or passengers, so you knew where to focus your thinking. But I got stuck on some characters whose identities seemed to me to be implied or hinted at, but for whom I didn’t think I had positive proof.
I eventually got tired of reviewing the same scenes over and over, looking for some detail I had overlooked, and read a walkthrough to find out what I was missing. It turned out I hadn’t missed anything; “an educated guess” was the standard the game expected, not “definitive proof”. But I was burnt out on the game by that point and stopped playing.
The way I see it, there are two separate issues for discussion here.
The first is permanently altering a classic console. That’s an issue of historical preservation, and I’m not going to get into that.
The second is whether, if you’re prepared to go as far as removing the original optical drive, you might as well drop the console entirely and go the emulation route. To me, suggesting this shows a lack of understanding of how emulation works.
A real console consists of IC semiconductors and discrete components that propagate electrical fields and shuffle the occasional electron around. A software emulator is a bag of rules and tricks that tries to replicate the overall output of a console. Even FPGA-based emulators aren’t 100% perfect, because their gates and connections aren’t configured identically to the original hardware.
Game consoles are very complex systems that operate via the interplay of dozens of intricate subsystems. That’s why emulators start off supporting only a handful of games, and rarely reach 100% compatibility. Emulator developers are forever picking the next emulation inconsistency from the bug report list, tracking down what their emulator is doing differently to the original hardware, and then adding a new rule for dealing with that particular case. If they’re lucky, a couple of other games will also start working better. If they’re unlucky, a couple of other games will start working worse.
(For the interested, the author of BSNES wrote a detailed article about these issues for Ars Technica)
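To make the “bag of rules” idea concrete, here’s a toy fetch-decode-execute loop in C. The opcodes are completely made up; a real emulator has thousands of such rules, all interacting with timing, video, and audio subsystems:

```c
#include <stdint.h>
#include <stdio.h>

enum { OP_HALT, OP_LOAD, OP_ADD };  /* hypothetical opcodes */

int main(void) {
    uint8_t mem[] = { OP_LOAD, 5, OP_ADD, 3, OP_HALT }; /* toy "ROM" */
    uint8_t acc = 0;
    size_t pc = 0;
    for (;;) {
        switch (mem[pc++]) {                   /* fetch + decode */
        case OP_LOAD: acc = mem[pc++]; break;  /* one rule per opcode;  */
        case OP_ADD:  acc += mem[pc++]; break; /* real chips have quirks */
        case OP_HALT:                          /* these rules don't model */
            printf("acc = %d\n", acc);
            return 0;
        }
    }
}
```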
Take the Atari 2600. It’s a very old console that was very popular. The community has full schematics not just for the mainboard, but even the CPU and custom video chip. More patient people than me have sat for hours with oscilloscopes and test ROMs to probe the console inside and out. There are emulators that can play every game that was released back in the day without fault. Heck, the emulator I use is so advanced that you can set it to emulate specific revisions of the console with specific CRT TV parameters, and it will glitch in the same way that the game would glitch on that combination of hardware in real life. But it’s still not a “perfect” emulation! Homebrew developers are still finding quirks in the real 2600 hardware that the emulators don’t replicate, at least until the next update.
I have a PS2 which plays my games from an internal hard drive, and which has its output fed through an HDMI converter. Why don’t I just emulate it? Well, if you want to play FFX, or MGS2, or Ratchet & Clank, that’ll work great. Those are popular games, and emulator developers have put a lot of effort into making sure the rules of their emulation work for them. But I have dozens of more obscure games that have game-breaking glitches or don’t launch at all under emulation. I also still have hundreds of discs that I don’t want to paw through, and that are slowly degrading until one day they’ll no longer work, an optical drive that gets a little closer to wearing out for good every time I use it, and a big, modern TV that hates analog inputs (not to mention no room for a bulky CRT).

Getting the data into the console, and getting the final video and audio out, are both fairly well understood and can usually be reimplemented reliably. But the heart of the console, where the data is turned into executing code, mixed with player input, and transformed into the output? That’s where the actual magic happens.
In my opinion, saying that if you’re going to replace an optical drive then you may as well just emulate the whole thing is a bit like saying that if you’re going to talk to Angela over the phone instead of in person, then you may as well just replace her with a well-trained AI chatbot.
Why bother? Because feeding data into the console and getting audio-visual signals out of it are both very well understood and can actually be replicated with essentially total accuracy. But the complex operations and subtle interactions of the CPU, video hardware, RAM, and other support chips can’t. That’s the important part of the console, not the optical drive or the analog video output.
Software emulators and FPGA-based systems give it a good try, and can often run the majority of software for a console at an acceptable fidelity for most users, but they’re a long, long way from being 1:1 perfect, and the more recent the console, the more games either don’t run properly or don’t run at all.