
Restore StorageContainer API #8518

Open
tomspilman opened this issue Oct 6, 2024 · 8 comments

@tomspilman
Member

Intent

We need to restore the old XNA StorageContainer APIs to provide a more general way for developers to work with save data.

See the version last released in MG 3.5:

https://github.com/MonoGame/MonoGame/blob/v3.5/MonoGame.Framework/Storage/StorageContainer.cs

Motivation

We need a cross platform way for developers to access save data on PCs, consoles, and mobile.

While the StorageContainer carries some old ideas (the AsyncCallback system), in general it abstracts file access well enough to hide the complexity of console save game access.

Some extra features and documentation could complete things and make it a good solution.
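
For reference, the classic XNA usage pattern looked roughly like this (a sketch from memory of the 3.5-era API; exact overloads may differ slightly):

```csharp
// Sketch of the classic XNA StorageContainer flow, including the
// AsyncCallback-style Begin/End pairs mentioned above.
using Microsoft.Xna.Framework.Storage;

// A real game would poll IsCompleted across frames instead of blocking here.
IAsyncResult selectResult = StorageDevice.BeginShowSelector(null, null);
StorageDevice device = StorageDevice.EndShowSelector(selectResult);

IAsyncResult openResult = device.BeginOpenContainer("MyGameSaves", null, null);
using (StorageContainer container = device.EndOpenContainer(openResult))
using (System.IO.Stream stream = container.CreateFile("slot1.sav"))
{
    // Serialize save data into the stream; the container routes it
    // to the platform's save storage.
}
```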

What would need to happen:

  • Restore old classes for StorageContainer.
  • Convert it to use the partial class platform abstraction calls.
  • Refresh the PC save implementation so that it feels good for players and devs.
  • Be sure the PC implementation plays well with PC storefronts like Steam and GOG.
  • Implement Android and iOS considering current mobile save storage systems.
  • Implement for Switch, Xbox, and PlayStation.
  • Write new documentation to guide developers and suggest best practices with saves.

This could be made into one or more bounties if we need to accelerate the development.

@mrhelmut
Contributor

mrhelmut commented Oct 6, 2024

I can see if I can share the implementation of our internal save data manager. It doesn't fit the StorageContainer API, but it can likely provide insights into how to manage save data across all platforms/consoles as a starting point.

What I did is a sort of in-memory database system where you write to "named streams", which are virtual file containers; when you do a "commit", it locks the streams for writing and starts a thread to call the platform-specific API (or just writes files on PC). It also prevents issuing too many I/O operations on consoles that have an I/O limit, and raises warnings.
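
A rough sketch of that shape (all names hypothetical; this is not our actual internal code):

```csharp
// Hypothetical sketch of a staged, in-memory save manager as described above.
// Writes go to named in-memory streams; Commit() snapshots them and flushes
// on a worker thread (platform save API on consoles, plain files on PC).
using System.Collections.Generic;
using System.IO;
using System.Threading;

public class SaveDataManager
{
    private readonly Dictionary<string, MemoryStream> _streams = new Dictionary<string, MemoryStream>();
    private readonly object _lock = new object();

    // Returns the named virtual file container to stage save data into.
    public Stream GetNamedStream(string name)
    {
        lock (_lock)
        {
            if (!_streams.TryGetValue(name, out MemoryStream stream))
                _streams[name] = stream = new MemoryStream();
            return stream;
        }
    }

    // Locks the staged streams, snapshots them, and flushes on a thread.
    public void Commit()
    {
        Dictionary<string, byte[]> snapshot;
        lock (_lock)
        {
            snapshot = new Dictionary<string, byte[]>();
            foreach (KeyValuePair<string, MemoryStream> kvp in _streams)
                snapshot[kvp.Key] = kvp.Value.ToArray();
        }

        // On consoles this thread would call the platform save API instead,
        // and could count commits to warn before hitting an I/O limit.
        new Thread(() =>
        {
            foreach (KeyValuePair<string, byte[]> kvp in snapshot)
                File.WriteAllBytes(kvp.Key, kvp.Value);
        }) { IsBackground = true }.Start();
    }
}
```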

If nobody takes it on, I might have a look at it, but anyone is welcome since I don't know when I'll be able to afford the time.

@mrhelmut
Contributor

mrhelmut commented Oct 7, 2024

Do we want to keep the exact same API? I can see some limitations in the original API. For instance, it assumes immediate writing of data when pushing something to a stream, which is not desirable on most platforms, and there is no global "flush" command to control that.

I would slightly change the API to make it obvious that changes are first staged in-memory, and that you need to manually commit/flush them.
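
As a strawman, the surface could make the staging explicit, something like (illustrative names, not a final design):

```csharp
using System;
using System.IO;

// Strawman surface that makes the staging explicit (illustrative names only).
public abstract class StorageContainer : IDisposable
{
    // Reads and writes operate on in-memory staged data, not the disk.
    public abstract Stream CreateFile(string fileName);
    public abstract Stream OpenFile(string fileName, FileMode fileMode);

    // Explicitly flushes staged changes to the platform's save storage.
    public abstract void Commit();

    // Could also commit any staged changes (see discussion below).
    public abstract void Dispose();
}
```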

@CartBlanche
Contributor

@mrhelmut I can have a stab at just bringing it back to life and, as @tomspilman mentioned, getting it to use the new partial classes structure. Then we can always enhance it down the line in a separate PR.
What do you think?

@CartBlanche CartBlanche self-assigned this Oct 7, 2024
@mrhelmut
Contributor

mrhelmut commented Oct 7, 2024

Feel free to! I think we should get the API planned ahead, and the rest can be done later (we just need to avoid the public API moving if we push it to a public release).

@tomspilman
Member Author

> I would slightly change the API to make it obvious that changes are first staged in-memory, and that you need to manually commit/flush them.

@mrhelmut

Yes, the main limitation is that there is no explicit "commit" call to flush results. We can add one to make the issue visible to developers, but I would make it optional to use it that way.

The design of the old API requires you to Dispose the StorageContainer. And the StorageContainer is designed to do multiple file operations (read/write files, create directories, enumerate files, etc.).

So I think we could make the safe assumption that once the StorageContainer is Disposed, we commit changes (if there were any).

If the user's code doesn't Dispose... that is a bug and on them. If the user's code uses lots of StorageContainer creations to do a single "save", then they have code that could be optimized (but it will work). We can also recommend best practices in the docs to avoid these issues.

So I think it could work as is... thoughts?
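
In usage, that would look something like this (OpenSaveContainer is a hypothetical stand-in for however the container ends up being acquired):

```csharp
// Usage under the proposed semantics: Dispose commits staged changes, while
// an explicit Commit() stays available for users who want to be deliberate.
// OpenSaveContainer is a hypothetical helper, not an existing API.
using (StorageContainer container = OpenSaveContainer("MyGameSaves"))
{
    using (Stream stream = container.CreateFile("slot1.sav"))
    {
        // Serialize the save data...
    }

    container.Commit(); // optional; skipping it is fine
} // Dispose commits anything still staged
```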

@mrhelmut
Contributor

mrhelmut commented Oct 7, 2024

IDisposable and flushing on dispose would work in most cases, although it might be a bit weird that Dispose may trigger an async operation.

Also, we might want to keep a StorageManager alive for the whole game session to avoid having to read files whenever we create an instance (or maybe have some sort of shared static cache of files already read to reduce I/O ops).

But yeah, I guess the public API may just work as-is. These are implementation concerns.

@tomspilman
Member Author

> although it might be a bit weird that Dispose may trigger an async operation.

Technically this even happens with FileStream: if you dispose without closing/flushing, it does that for you during Dispose. So I think it is OK.

We would still have a new explicit Commit() call in there for users that want to be explicit about it.

> Also, we might want to keep a StorageManager alive for the whole game session to avoid having to read files whenever we create an instance

My worry about having it all in memory is that I've seen some games with really big save data files. These games struggle on consoles, but you can do it.

So we may keep this caching to small files and hit the disk for large ones.

> But yeah, I guess the public API may just work as-is.

One thing we may have to deal with is that on some platforms you cannot keep save storage open for a long period of time.

I'm not fully sure yet how we enforce that beyond just documenting the limit and offering warnings on PC platforms.

@mrhelmut
Contributor

mrhelmut commented Oct 8, 2024

What I have in mind (and did internally) is that when you create the manager, save data are opened -> read/cached -> closed, and then you access the cached data, to avoid having too many I/O operations or leaving the save data open.

But that assumes that save data are small enough to be directly cached. It is indeed not suitable for large save data.

Maybe we could give control over the caching system. E.g. an overload of OpenFile() with an optional parameter bool shouldCache = false, or something like CacheFiles(params string[] files) which would read and cache a batch of files, paired with an OpenFileFromCache().
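
In signature form, that might look like this (a sketch; the three member names are the ones proposed above, the rest is illustrative):

```csharp
using System.IO;

// Sketch of the caching controls suggested above (shape illustrative).
public interface IStorageFileCache
{
    // Opens a file, optionally keeping its contents in a shared cache so
    // later reads don't cost another platform I/O operation.
    Stream OpenFile(string file, bool shouldCache = false);

    // Reads and caches a batch of files in a single I/O pass.
    void CacheFiles(params string[] files);

    // Serves a previously cached file without touching save storage.
    Stream OpenFileFromCache(string file);
}
```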

We can likely implement threshold detection and generate warnings when certification would fail due to too many I/O ops.
