Fails to show new data when event files are replaced (e.g. w/ rsync) #349
How is it possible for a GCS event log to be gsutil rsync'd and then change afterwards? Event log filenames have timestamps in them. Files don't exist on GCS until the upload completes.
I just spoke with @nfelt. It turns out TensorFlow does resumable uploads, so GCS event logs can be read before the upload completes. I assume you want …
I'm re-opening this issue since we've gotten a few other reports and they don't always involve GCS - this applies to usage of regular rsync and other syncing programs as well if they update files via replacement rather than by appending. We may or may not fix this but it's fair to track as an open issue.
I'm using …. It seems one could use the suggestions from here to watch folders for updates from …. I'd be happy to contribute if there's a fix that the devs agree upon.
@gokul-uf can you use …? Re: watching for updates, my understanding is that there is no single cross-platform way to do this, and it's impossible for remote filesystems that have no "filesystem notifications" API. We're making some changes to how we traverse logdirs right now, so I wouldn't want to complicate things further, but down the road we might consider this approach as an optimization for the cases where it is supported.
That should work for now, thanks!
A short script implementing the double-sync workaround described below (gsutil rsync into a landing directory, then rsync --inplace into the directory TensorBoard watches):

```bash
#!/bin/bash
# Sync a GCS folder into a local TensorBoard logdir without exposing
# TensorBoard to gsutil's delete-and-recreate behaviour.
# TODO: allow these folders to be defined as environment variables
TB_LANDING_DIR="${HOME}/tb-landing"
TB_DIR="${HOME}/tb"

src="gs://$1"
tmp="${TB_LANDING_DIR}/$1"
dst="${TB_DIR}/$1"

echo "src: ${src}"
echo "landing on: ${tmp}"
echo "dst: ${dst}"

mkdir -p "${tmp}" "${dst}"

# Stage 1: mirror GCS into the landing directory (deletes/recreates files).
gsutil -m rsync -r "${src}" "${tmp}"
# Stage 2: copy into the watched directory, updating files in place so that
# TensorBoard's open file handles stay valid. Trailing slashes sync the
# *contents* of tmp into dst rather than nesting tmp inside dst.
rsync --inplace -av "${tmp}/" "${dst}/"

# vim: set syntax=bash
```
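Presumably you would save this as something like tb-sync.sh (the name is an assumption) and invoke it as ./tb-sync.sh BUCKET/FOLDER on a loop or a cron schedule, with TensorBoard's logdir pointed at ~/tb. The landing directory exists purely so that gsutil rsync's delete-and-recreate never touches the files TensorBoard has open.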
The second solution looks better for developers on different platforms. Is it difficult to detect when the event file changes?
Hm, I finally found that the event generator never refreshes itself. Generators only send new events once an event file is created, but not when files are deleted. I will try restarting the event file loaders.
```python
with self._generator_mutex:
    # Original code:
    # for event in self._generator.Load():
    #     self._ProcessEvent(event)

    # Try to refresh the cache instead: drop the cached tensors and
    # rebuild the generator from the path, so deleted/recreated event
    # files are picked up again.
    self.tensors_by_tag = {}
    test_generator = _GeneratorFromPath(self.path, self.event_file_active_filter)
    for event in test_generator.Load():
        self._ProcessEvent(event)
    if self.tensors_by_tag:
        self.OutdatedData = self.tensors_by_tag
    return self
```

I rewrote that reload code. It works, but rebuilding the generator wastes a lot of time.
@pindinagesh I would prefer to keep this open for the reasons I gave in the comment above #349 (comment) (which was many months after the PR you mentioned was merged). |
I am also facing the same issue; for now I am using rsync --inplace as a workaround on my institute's HPC server, but I think TensorBoard should do a smarter lookup in the log directories and check whether the event files have been deleted. Please let me know when this is fixed.
Update: as of about a month ago, TensorBoard nightly releases have an experimental flag for this. It should handle the delete-and-recreate case described in this issue. Other caveats apply as well.
Feedback is welcome, but please note that this functionality is experimental and any support for it will continue to be best-effort.
Why not use inotify to watch the directory?
I think because that doesn't exist for object stores (e.g. GCS) 🤷.
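For a local filesystem this is doable. Here is a minimal sketch of the notification approach using the third-party watchdog package (an assumption for illustration; TensorBoard does not actually use it), which wraps inotify on Linux and falls back to other mechanisms on other platforms:

```python
# pip install watchdog  (third-party package; assumed for this sketch)
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class EventFileHandler(FileSystemEventHandler):
    """React to event files being created, replaced, or deleted."""

    def on_created(self, event):
        print(f"(re)created: {event.src_path}")  # a loader would reopen/reread here

    def on_deleted(self, event):
        print(f"deleted: {event.src_path}")      # a loader would drop its stale handle here

observer = Observer()
observer.schedule(EventFileHandler(), "/path/to/logdir", recursive=True)
observer.start()
try:
    observer.join()
except KeyboardInterrupt:
    observer.stop()
```

But as the replies above note, nothing equivalent exists for gs:// paths, so polling would still be needed there as a fallback.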
Problem summary
We have a problem: TensorBoard (1.2.1) does not show new run data since we switched to syncing the data from Google Cloud Storage to a local directory.
It looks like when TensorBoard reads an event file from a local directory, it does not notice that the event file was deleted and recreated (which is quite a valid case when you are using gsutil rsync to sync the data from Google Cloud Storage). I have a short-term workaround (to be tested), but I think this needs a permanent fix in TensorBoard; it's quite a valid case to handle, especially due to another bug (#158).
Some context and problem manifestation
Initially we stored data from our ML runs in Google Cloud Storage and pointed our TensorBoard to read the data directly from there (logdir=gs://BUCKET/FOLDER). We run TensorBoard in a Kubernetes cluster and made it available to all our team members via an OAuth proxy. This is great, as we can always see fresh data and any member of the team can access it there. We wanted the data to refresh quite frequently, every few minutes. This worked very well, but we found that there is a cost downside to it.
Due to #158, TensorBoard generated a lot of GCS accesses just by scanning GCS (several million accesses per day). This was not only slow but also incurred relatively high GCS costs (20 USD in 4 days!).
Then we switched to another approach: we sync the data from GCS to a local folder using

```
gsutil -m rsync -d gs://BUCKET/FOLDER LOCAL_FOLDER
```

and then TensorBoard points to that locally downloaded folder instead of pointing to gs://. Additional information: we keep the whole tree structure within that logdir, so that we can see and compare all the different runs we've already produced.
Unfortunately, after switching to rsync we have a big problem. New runs (in new subfolders) show up among the others after the runs are executed, but they never show any data until TensorBoard is restarted. It looks like TensorBoard picks up some initially synced partial results from the run (not enough to show any data) and is then somehow blocked from reading any new data for that folder.
When I start an exact copy of TensorBoard on the same machine, it shows the data for those new runs properly; you can see the screenshot below showing the two TensorBoards side by side. The TensorBoard on the left was freshly started; the one on the right was already running before the run was started.
Result of investigation
I read in detail how gsutil rsync synchronizes the data and guessed what might be wrong. I confirmed it by looking at the lsof output for the long-running TensorBoard instance, where it becomes quite obvious (look at the "deleted" entries). gsutil rsync works in such a way that it always deletes and recreates files when they change; it never appends changes to files. The manual states this, and it follows from the fact that GCS files are in fact objects, not files, and are immutable, so you don't even have the possibility to transfer them partially: it's always all-or-nothing.
On the other hand, what TensorBoard does is quite reasonable: when its logdir is set to a local folder, it opens the events* file and expects that the TensorFlow-based training job will keep appending data to it, rather than deleting and recreating the file when new data appears. That is a completely reasonable assumption, and I guess it behaves quite differently when it accesses GCS directly (for the same reasons gsutil rsync only synchronizes whole files).
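To make the failure mode concrete, here is a small self-contained Python sketch (the path is hypothetical; this is not TensorBoard code) that reproduces, on Linux, what delete-and-recreate does to a process holding the file open:

```python
import os

path = "/tmp/events.out.demo"  # hypothetical stand-in for an event file

# A writer creates the file; a long-running reader opens it.
with open(path, "w") as w:
    w.write("step 1\n")
reader = open(path, "r")

# Simulate what gsutil rsync does when the object changes:
# delete the file and recreate it from scratch.
os.remove(path)
with open(path, "w") as w:
    w.write("step 1\nstep 2\n")

# The reader still holds the old, now-anonymous inode: it sees only the
# original contents and will never see "step 2", no matter how long it
# polls. This is exactly the "(deleted)" entry that lsof shows.
print(reader.read())                       # -> "step 1\n"
print(os.fstat(reader.fileno()).st_nlink)  # -> 0: the open file has no name
reader.close()
```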
You can see in the lsof output that we have a problem with 20171108_b2ere44n and 20171108_kcjniy6q: in both cases the file has already been deleted, but TensorBoard still has it open (which is quite normal behaviour on Linux/ext4; the file's inode is only physically removed after the last process holding it open closes it).

Potential solutions
I see several potential solutions to the problem, and it would be great if someone from the TensorBoard team could confirm whether the long-term solution I propose is good.
Short-term workaround: I will set up double syncing:

gs -> gsutil rsync -> linux rsync --inplace -> logdir

I tested that I can make the standard Linux rsync work in a way that does not delete/recreate files but replaces the data in place (the --inplace switch). I also considered and tested the behaviour of --append and --append-verify, but I think --inplace is the safer bet (--append-verify will silently refuse to sync a file whose content does not match the prefix of the old file). The process first syncs the data to an intermediate folder with gsutil rsync and then syncs it to the final destination with Linux's rsync --inplace. The obvious downsides of this workaround are that it makes our process more complex and error-prone, and that we have to keep two copies of our whole log directory in our Kubernetes pod. That's quite bad long-term, when we will have a lot more data to look at.

Proper fix: I think TensorBoard should be smarter about reading local data: check whether the file has been deleted since it was last read, and close/reopen/reread it when it has. You could use fstat for that (see the second answer here: https://stackoverflow.com/questions/12690281/check-if-an-open-file-has-been-deleted-after-open-in-python). Can this be done in the next version, please? Or is there already a possible solution for the current TensorBoard (1.2.1)?
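A minimal sketch of that detection, assuming a plain Python reader loop (the helper names are made up; this is not TensorBoard's actual loader code):

```python
import os
import time

def file_was_replaced(f, path):
    """True if `path` now refers to a different inode than the open handle `f`."""
    try:
        on_disk = os.stat(path)
    except FileNotFoundError:
        return True  # deleted and not (yet) recreated
    handle = os.fstat(f.fileno())
    # Compare device and inode; st_ino alone is only unique per device.
    return (handle.st_dev, handle.st_ino) != (on_disk.st_dev, on_disk.st_ino)

def tail_event_file(path, poll_secs=5):
    """Yield raw chunks from `path`, reopening whenever the file is replaced."""
    f = open(path, "rb")
    try:
        while True:
            chunk = f.read()
            if chunk:
                yield chunk
            if file_was_replaced(f, path):
                f.close()
                f = open(path, "rb")  # start over on the new inode
            else:
                time.sleep(poll_secs)
    finally:
        f.close()
```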