Sega Master System / Mark III / Game Gear
SG-1000 / SC-3000 / SF-7000 / OMV
Scanning and publishing my collection
Posted: Sat Aug 29, 2020 10:38 am
Better late than never, I have started getting my collection scanned.
I will do it in batches with help (I've been hiring a kid to assist). The first batch was done in July-August; I don't know when the next batch will be.
I discussed this at length with Joseph from the Game Preservation Society, and am following their guidelines.
My process is not nearly as thorough as Game Preservation Society's process, but I think it's an acceptable, pragmatic compromise given available resources.
- Using an Epson Perfection V750 Pro
- Calibrated the scanner with an IT8 Color Chart and the Argyll CMS tools, with GamePres help (the calibration process is tricky).
- Every scan includes a Q-13 Color Chart in the picture.
- Scanning with VueScan to raw TIFF, 24-bit, at 1200 DPI; each scan is about 200-300 megabytes.
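As a sanity check on those file sizes, the raw size of an uncompressed 24-bit scan follows directly from the scan area and DPI. A rough sketch (the 7x9 inch area below is my own illustrative guess at a Game Gear box plus the color chart, not the actual scan-bed setting):

```python
# Estimate the raw size of an uncompressed 24-bit RGB scan.
# The scan area is a hypothetical example, not the author's actual setting.
def raw_scan_bytes(width_in, height_in, dpi=1200, bytes_per_pixel=3):
    """Uncompressed pixel data size, ignoring TIFF header overhead."""
    return int(width_in * dpi) * int(height_in * dpi) * bytes_per_pixel

size_mb = raw_scan_bytes(7.0, 9.0, dpi=1200) / (1024 * 1024)
print(f"{size_mb:.0f} MB")  # prints "260 MB"
```

A 7x9 inch area at 1200 DPI comes out around 260 MB, which is consistent with the 200-300 MB per scan quoted above.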
I started with Game Gear games because:
- They generally tend to be poorly scanned (compared to SMS games)
- Flattening them gives decent results
A better process would technically be to use a heatgun and unglue all boxes and flatten them thoroughly. This may be needed to make good scans for Mark III, SG-1000, SC-3000 games. I'm not sure what I'll be doing for those yet.
So far we have managed to scan:
- 99% of the Game Gear USA set (missing minor variations)
- 99% of the Game Gear European set
- 80% of the Game Gear Brazilian set
- The Korean games I have that come with an inlay (about 80 of them)
See attached pictures.
I have since been thinking that our bottleneck for publishing scans is not so much the scanning itself as the process of cleaning, cropping, resizing, and publishing, in a way that is flexible, reproducible, and automatable where possible. I think the whole preservation community is currently very poorly equipped, and our sharing, publishing, and preservation processes are rather poor. The Game Preservation Society uses dozens of ad-hoc Adobe/Photoshop scripts for this purpose.
I am currently contemplating writing a custom tool and designing a file format for this purpose. The idea is that the tool would let you mark shapes and annotations associated with a raw TIFF file; based on those annotations, the tool could then invoke ImageMagick to automatically apply the calibration profile, rotate, crop, and export scans at whichever sizes you want, e.g. export the front side of every box as a 1000-pixel-wide JPG.
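The annotation-driven export step could be sketched roughly as below. The metadata schema (a "regions" list with a name and a crop box) is entirely hypothetical; the ImageMagick invocation uses the standard `-crop`/`-resize` options, and nothing here ever touches the original TIFF:

```python
# Sketch: sidecar region metadata drives ImageMagick exports.
# The metadata schema is a made-up example, not a real format.
import json

def export_command(tiff_path, region, out_width, out_path):
    """Build an ImageMagick command that crops one annotated region
    and resizes it to the requested width (aspect ratio preserved)."""
    x, y, w, h = region["x"], region["y"], region["w"], region["h"]
    return [
        "magick", tiff_path,
        "-crop", f"{w}x{h}+{x}+{y}",
        "-resize", str(out_width),
        out_path,
    ]

metadata = json.loads("""
{"regions": [{"name": "front", "x": 120, "y": 80, "w": 5400, "h": 7600}]}
""")
cmd = export_command("scan0001.tif", metadata["regions"][0],
                     1000, "front_1000w.jpg")
print(" ".join(cmd))
```

The tool would then run such commands (e.g. via subprocess) for every region of every scan, so re-exporting the whole set at a new size is a single batch operation.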
The core idea is to steer clear of the pattern where a person scans something, vaguely crops and resizes it, and publishes that, while the high-resolution data stays on their hard drive, is shared with a few people, and is eventually lost. We can only avoid that through a standardized process.
In our process the TIFF files would never be edited or saved past the initial scan. We would keep pristine, untouched TIFF files (with SHA checksums and PAR recovery data), and the metadata locating elements would be saved in a separate, text-based file. This will not only clarify the process but also facilitate long-term sharing: it becomes easier and more attractive to share raw, uncropped, lossless TIFF files plus a metadata text file, since they are the natural source data from which to automatically produce output for display/web/applications. The metadata file can evolve and be versioned; the TIFF file doesn't. Every other file (for publication on the web, etc.) can be recreated from those two.

As for actual cleaning of the picture, we CAN duplicate the TIFF file and alter its pixels, but this can happen at any point later in the pipeline: someone can do the tagging and the work of marking regions now, and that work can be reused for automatic recropping/export if someone decides to clean the scans five years later. Since realistically we may not be able to clean all scans prior to publication, being able to do the work out of order is beneficial.
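The "freeze the pristine TIFF" step could look something like the sketch below: hash the file and write a sidecar checksum next to it, so any later bit-rot or accidental edit is detectable (PAR recovery data would be generated separately with the `par2` CLI). File names here are examples only:

```python
# Sketch: record a scanned TIFF's SHA-256 in a sidecar file,
# using the same "<hex>  <name>" format as the sha256sum tool.
import hashlib
from pathlib import Path

def write_checksum(path):
    """Hash the file in chunks (scans are hundreds of MB) and
    write the digest to '<path>.sha256' next to it."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    sidecar = Path(str(path) + ".sha256")
    sidecar.write_text(f"{h.hexdigest()}  {Path(path).name}\n")
    return h.hexdigest()
```

After this point the TIFF is treated as read-only; verification is just re-hashing and comparing against the sidecar.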
Similarly, once we have the metadata we can easily automate creating the database or wiki files needed for publication here (our current web framework requires creating a bunch of wiki data, which can be a little tedious).
That's only an idea at this point, but I will try to make it happen.
Manuals will be another problem...
Posted: Tue Sep 15, 2020 5:19 am
This is awesome! The workflow and ideal tool you describe seem like they could avoid repeated effort for many years and ensure good results.