Thanks!
Just a quick answer:
1) I'm actually storing the whole lower-detail model in memory, but instead of doing it manually in Blender, it's all automated. So I guess LOD is appropriate, lol. Mipmapping is also a form of LOD, so I do both techniques. Anyway, I will improve memory consumption by sharing the vertices between the HQ and LQ models (roughly the idea in the sketch below). You can see which games use this technique by opening the VDP1 debugger and looking at the sprites: far away models will usually merge 2 or more textures together. A few games I know that use it: Sonic Jam, Tomb Raider, Quake, Wipeout 2097 and Duke Nukem.
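For reference, a minimal sketch of what that vertex sharing could look like in C. The struct names and the distance threshold are made up, not from any SDK; FIXED, POINT and toFIXED are SGL's fixed-point type, vertex type and conversion macro:

```c
#include "sgl.h"

/* Minimal sketch: the LQ model owns no vertex data, only indices into
 * the HQ model's vertex pool, so each vertex exists once in memory. */
typedef struct {
    POINT  *verts;       /* shared vertex pool (POINT = FIXED[3] in SGL) */
    Uint16 *quads;       /* 4 indices per quad into verts[] */
    Uint16  nbQuads;
    Uint16  textureId;   /* the LQ version points at the merged texture */
} LodModel;

/* Pick a model by camera distance; the threshold is an arbitrary
 * example value, not a recommendation. */
const LodModel *selectLod(const LodModel *hq, const LodModel *lq, FIXED dist)
{
    return (dist > toFIXED(50.0)) ? lq : hq;
}
```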
As for the SS SDKs, none of them do it, for a good reason: since you need to create a new sprite every time, it would be a nightmare to manage, not to mention all the DMA involved. On PS1, since they use texture coordinates, it's super easy. That being said, you could subdivide a quad on Saturn (like a 64x64 one) into four 32x32 quads without generating a new texture, just by playing with the memory addresses, width and height (see the sketch below).
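Something like this, just to illustrate the address math (hypothetical helper, not from any SDK; CMDSRCA and CMDSIZE are the fields from the VDP1 command table). Shown for the simplest case, a full-width horizontal band, whose rows stay contiguous in VRAM, and assuming 16bpp texture data:

```c
#include <stdint.h>

/* Hypothetical helper: point an existing sprite command at a sub-band
 * of a texture already sitting in VDP1 VRAM, no new texture data.
 * Per the VDP1 command table, CMDSRCA holds the source address in
 * 8-byte units and CMDSIZE packs (width / 8) and height. */
void setSubTexture(uint16_t *cmdSrca, uint16_t *cmdSize,
                   uint32_t texVramAddr,  /* byte address of the texture */
                   uint16_t texW,         /* full texture width in pixels */
                   uint16_t firstRow,     /* first row of the sub-band */
                   uint16_t bandH)        /* sub-band height in pixels */
{
    uint32_t off = (uint32_t)firstRow * texW * 2u;     /* skip whole rows */
    *cmdSrca = (uint16_t)((texVramAddr + off) >> 3);   /* 8-byte units */
    *cmdSize = (uint16_t)(((texW / 8u) << 8) | bandH); /* w/8 : h */
}
```

A 64x64 texture split this way gives you four 64x16 bands for free; the 32x32 quadrant case works with the same arithmetic as long as each tile ends up contiguous in VRAM.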
I don't plan to add that feature, as it would be quite complicated to manage, but it could lead to better results with close quads, reducing warping and the like.
2) For the fog effect, it's supported by the SS hardware thanks to VDP2 transparency (color calculation). That's a feature I want to implement. One way I plan to do it is to use color bank mode for far away objects and the normal VDP1 LUT for close objects, then put VDP2 transparency on the CRAM sprites. I'm just not sure if there is a way to use an automatic color calculation ratio, or if I need to calculate it myself (something like the sketch below). Worst case scenario, if all of that fails, is normal depth-based gouraud, but it doesn't play nice with some colors (white, blue, yellow, etc.).
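If it comes to calculating it myself, the idea would be to quantize depth into a few fog bands and derive a 5-bit ratio per band (VDP2 color calculation ratios go from 0, opaque, to 31, fully blended). Just a sketch: the band count and distances are arbitrary example values, and routing each band's ratio to the hardware is left out:

```c
#include "sgl.h"

#define FOG_BANDS 4
#define FOG_NEAR  toFIXED(20.0)   /* arbitrary example distances */
#define FOG_FAR   toFIXED(200.0)

/* Map an object's depth to a 5-bit VDP2 color calculation ratio,
 * quantized into FOG_BANDS steps so only a few ratios/palettes are
 * needed. 0 = opaque, 31 = fully blended with the back screen. */
Uint8 fogRatioForDepth(FIXED z)
{
    Sint32 band;
    if (z <= FOG_NEAR) return 0;
    if (z >= FOG_FAR)  return 31;
    band = ((z - FOG_NEAR) * FOG_BANDS) / (FOG_FAR - FOG_NEAR);
    return (Uint8)((band * 31) / (FOG_BANDS - 1));
}
```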
Edit: About the limits, it's only the default SGL buffer memory allocation; you can increase it. But with a portal system or a PVS, the default values should be fine, since what matters is the peak polygon count visible from any one spot, not the whole level (see the sketch below).
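To illustrate (hypothetical structures, nothing SGL-specific): with a PVS, the buffers only ever need to hold the biggest set of polygons visible from a single cell, which you can compute offline and compare against whatever buffer sizes you configure:

```c
#include "sgl.h"

typedef struct {
    Uint16 nbPolys;     /* polygons belonging to this cell */
    Uint8 *visible;     /* visible[i] != 0 if cell i is in this cell's PVS */
} Cell;

/* Worst-case polygon count the renderer can see from any one cell;
 * this, not the whole level, is what the buffers must hold. */
Uint32 peakVisiblePolys(const Cell *cells, Uint16 nbCells)
{
    Uint32 peak = 0;
    Uint16 c, i;
    for (c = 0; c < nbCells; c++) {
        Uint32 sum = cells[c].nbPolys;   /* the cell itself */
        for (i = 0; i < nbCells; i++)
            if (i != c && cells[c].visible[i])
                sum += cells[i].nbPolys;
        if (sum > peak)
            peak = sum;
    }
    return peak;
}
```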