I would scrape them into individual JSON files with more info than you think you need, just for the sake of simplicity. Once you have them all, you can work out an ideal storage solution, probably some kind of SQL DB. Once that is done, you could turn the JSON files into a .tar.zst and archive it, or just delete them if you are confident in the processed representation.
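A minimal sketch of that first pass plus the later consolidation, assuming a scraper that already hands you a dict per item (the "title" field and the file layout are my own placeholders, not anything from your site):

    import json
    import sqlite3
    from pathlib import Path

    OUT_DIR = Path("items")
    OUT_DIR.mkdir(exist_ok=True)

    def save_item(item_id: str, data: dict) -> None:
        # One file per item; keep every field the scraper returns,
        # even the ones you don't think you need yet.
        (OUT_DIR / f"{item_id}.json").write_text(
            json.dumps(data, indent=2, ensure_ascii=False))

    def build_db(db_path: str = "archive.db") -> None:
        # Second pass: fold the loose files into SQLite (stdlib).
        # 'title' is a hypothetical field; 'raw' keeps the full JSON.
        con = sqlite3.connect(db_path)
        con.execute("CREATE TABLE IF NOT EXISTS items"
                    " (id TEXT PRIMARY KEY, title TEXT, raw TEXT)")
        for f in sorted(OUT_DIR.glob("*.json")):
            data = json.loads(f.read_text())
            con.execute("INSERT OR REPLACE INTO items VALUES (?, ?, ?)",
                        (f.stem, data.get("title"), json.dumps(data)))
        con.commit()
        con.close()

Once the DB checks out, something like tar --zstd -cf items.tar.zst items/ (GNU tar 1.31+) handles the archive step.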
Source: I completed a similar but much larger story site archive and found this to be the easiest way.
Yup, I think it'd work fine, especially if you want the ability to easily inspect individual items.
Any of the popular Python YAML libraries will be more than sufficient. With a bit of work, you can marshal the input (when reading files back) into Python (data)classes, making the data easy to work with.
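A minimal sketch of that round trip with PyYAML (any of the other libraries would look much the same); the Item fields here are made up for illustration:

    import yaml  # PyYAML
    from dataclasses import dataclass, fields
    from pathlib import Path

    @dataclass
    class Item:
        id: str
        title: str
        body: str

    def load_item(path: Path) -> Item:
        data = yaml.safe_load(path.read_text())
        # Keep only the keys the dataclass declares; ignore extras.
        names = {f.name for f in fields(Item)}
        return Item(**{k: v for k, v in data.items() if k in names})

    item = load_item(Path("items/1234.yaml"))
    print(item.title)

Filtering the parsed dict against fields(Item) means new keys added to the files later won't break old loading code.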