Update tensorflow ? #41
Tensorflow 1.15 -> max python version = 3.7 (tensorflow/tensorflow#39768 (comment))
I have tried to do this, as has another user separately. There is a bug in keras which prevents the model from being loaded properly in more recent versions of keras, and the keras team has stated they don't have the resources to patch it. The plan now is to train future versions of DeepSlice in tf 2.0 and then phase out the current models, but there's no timeline on that at present.
Ok, good to know! I'll see what I can do.
What do you think of the idea of building a small command line interface around DeepSlice? I do not know how complex that is, but this could help make DeepSlice available while using more recent python versions in different environments. There's also the fancy client-server architecture with a local server (which you also have implemented on the deepslice website), but at first sight it looks more complex...
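To make the CLI idea concrete, here is a minimal sketch of what such a wrapper could look like. To be clear, this is purely illustrative: the argument names (`image_folder`, `--species`, `--output`) are assumptions, not DeepSlice's real interface, and the model call is stubbed out.

```python
# Hypothetical sketch of a minimal CLI around DeepSlice.
# The argument names below are invented for illustration.
import argparse
import json

def build_parser():
    parser = argparse.ArgumentParser(
        description="Run DeepSlice alignment on a folder of section images."
    )
    parser.add_argument("image_folder", help="folder containing the brain section images")
    parser.add_argument("--species", choices=["mouse", "rat"], default="mouse")
    parser.add_argument("--output", default="results.json",
                        help="where to write the predicted alignments")
    return parser

def main(argv=None):
    args = build_parser().parse_args(argv)
    # In a real tool this would call DeepSlice's prediction and save methods;
    # here we just echo the parsed configuration as JSON.
    config = {"folder": args.image_folder, "species": args.species, "output": args.output}
    print(json.dumps(config))
    return config
```

Calling `main(["sections/", "--species", "rat"])` from another process (or wiring `main` up as a `console_scripts` entry point) would then let the heavy tensorflow environment stay isolated behind a simple command invocation.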
Would this be preferable to an API? I know the API has been planned for some time but I want to get started soon.
You mean an API to the DeepSlice server? Or an API to the current DeepSlice code (in which case you already have one, no)? Just in case it's useful, this tool by @ksugar (https://github.com/ksugar/qupath-extension-sam) shows a nice example of a client-server architecture that enables using SAM from another application (QuPath). To me the server way looks a bit more complex, which is why I suggested a CLI, but I do not have a strong preference. If anything, I like it when things are simple to install...
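For comparison, the local-server route can also be sketched with nothing but the standard library. Everything here is invented for illustration (the `/predict` endpoint, the JSON payload shape); a real server would run the model inside its own tf 1.15 environment instead of the stub reply.

```python
# Minimal local client-server sketch (stdlib only).
# The endpoint and payload shape are hypothetical, not DeepSlice's real API.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        # A real server would run the model here, inside its own TF env.
        reply = {"n_sections": len(payload.get("images", [])), "status": "ok"}
        body = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

def serve_once(port=0):
    # port=0 asks the OS for a free ephemeral port
    server = HTTPServer(("127.0.0.1", port), PredictHandler)
    thread = threading.Thread(target=server.handle_request, daemon=True)
    thread.start()
    return server, thread

server, thread = serve_once()
req = Request(
    f"http://127.0.0.1:{server.server_port}/predict",
    data=json.dumps({"images": ["s1.png", "s2.png"]}).encode(),
    headers={"Content-Type": "application/json"},
)
response = json.loads(urlopen(req).read())
thread.join()
```

The appeal of this pattern is exactly what the qupath-extension-sam example shows: the client (Java, QuPath, whatever) only needs to speak HTTP and JSON, and never touches the python environment directly.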
API to the DeepSlice server :) I think that's actually the easiest, as it already has a backend that accepts images from the web page; I just need to modify that somewhat.
Hey, me again. I also would appreciate the ability to update to a newer python version in my DeepSlice conda environment, as all the other image processing in the same environment is extremely slow compared to running the same code with python 3.10 or above. I would like to avoid having separate environments for separate tasks in the pipeline, as then my users need to install multiple environments. As they are biologists, even making them install one environment is a bit of a chore.

I have looked into the issue a little bit, trying to see if I could come up with a workaround. The core issue seems to me to be a mismatch between the outputs of the […]. Specifically, I think that in your old code, the […].

Have you tried loading the weights with tf2 and with the xception model unaltered? I would make a bet that it will work.
Shame, thanks for trying though.
Just FYI, I gave the CLI approach a try and it's very easy. I wrote a small script that I put in the python lib: […]

In the end, because DeepSlice does not require transferring gigantic images, I'm pretty happy with it; data transfer is not a bottleneck. Keeping the conda env separate prevents issues with the combination of tools and (as far as I'm concerned) greatly simplifies the matching with Java.

(side note: I've finally made the transition from xml parsing to json, but I've not released the new version yet. Anyway, I'm ready to drop xml support, which was a pain)
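The "call a script living in its own conda env" pattern described above can be sketched as follows. The interpreter path is the key trick: instead of activating the environment, you invoke that environment's python directly (e.g. something like `~/miniconda3/envs/deepslice/bin/python`). To keep this example self-contained and runnable, it uses the current interpreter and an inline stand-in script.

```python
# Sketch of driving a script in a separate (e.g. conda) environment by
# invoking that environment's interpreter directly and exchanging JSON.
# DEEPSLICE_PYTHON would normally point at the dedicated env's python;
# here we use the current interpreter so the demo is self-contained.
import json
import subprocess
import sys

DEEPSLICE_PYTHON = sys.executable  # stand-in for the dedicated env's python

# Stand-in for the small CLI script installed in the python lib:
script = (
    "import json, sys; "
    "print(json.dumps({'received': sys.argv[1], 'status': 'ok'}))"
)

proc = subprocess.run(
    [DEEPSLICE_PYTHON, "-c", script, "sections_folder"],
    capture_output=True, text=True, check=True,
)
result = json.loads(proc.stdout)
```

From Java the equivalent is a `ProcessBuilder` call with the same argument list, which is presumably what makes the matching with Java straightforward here.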
First: This package is awesome. Thank you so much for putting it together.

I appear to have a working version of DeepSlice with Tensorflow 2.13 and Python 3.10. I found a couple of bugs along the way that I will try to detail here.

As you pointed out above, the main problem is that the interface you used to remove the xception top layers in Tensorflow 1.X is no longer available. Specifically these lines:

```python
base_model._layers.pop()
base_model._layers.pop()
```

As you pointed out here #41 (comment), this has some weird behavior. There appear to be two main issues here.

First, while this command does indeed remove layers from the `_layers` list in the model, it is really only hiding those layers, most likely taking them out of the training interface. However, by inspecting `base_model.__dict__` you will see that references to these layers still exist, and this actually creates a kind of separate dead-end node path in parallel.

Second, I believe this double layer pop was intended to remove 1) the softmax layer, and then 2) the 1000-element dense prediction layer underneath it. Because the softmax is actually rolled into the dense prediction layer, this instead removes (but does not actually remove) the prediction layer and the global average pooling layer beneath it. If we try to do this through the normal interface in modern TF, we will eventually error during prediction:

```python
base_model = Xception(include_top=True, weights=xception_weights)
base_model = Model(inputs=base_model.input, outputs=base_model.layers[-3].output, name="xception")
```

because the top feature tensor from this is 10 x 10 x 2048, not 2048 as intended.

As you pointed out in your keras issue, we can get weights loaded by removing only the top layer and then using some combination of weight-loading options so that they are loaded by name. But some layers aren't going to load, and the results are not going to be accurate. The issue here is that while the layer-popping method successfully removed the layer from the save file, the 1000-element prediction-and-softmax layer from Xception very much still exists. Indeed, if you look at your saved weights file, the first Dense layer you add is 1000 x 256, not 2048 x 256. So with the weight files as they exist, Keras will either not load the first Dense layer you added, or not load the Xception weights (because it wants the missing prediction layer).

I fixed this by recreating the model and adding the prediction layer into the weight files like this:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras import Model
import tensorflow as tf
from tensorflow.keras.applications.xception import Xception
import h5py

allen_weight_path = '../../DeepSlice/metadata/weights/Allen_Mixed_Best.h5'
new_allen_weight_path = '../../DeepSlice/metadata/weights/Allen_Mixed_Best_2.h5'

# Copy the xception layer groups up to the top level of a new weights file
with h5py.File(new_allen_weight_path, 'w') as f_dest:
    with h5py.File(allen_weight_path, 'r') as f_src:
        for name, item in f_src['xception'].items():
            f_src.copy(f_src['xception'][name], f_dest, name)

base_model = Xception(include_top=True, weights=xception_weights)
base_model = Model(inputs=base_model.input, outputs=base_model.output, name="xception")

model = Sequential()
model.add(base_model)
model.add(Dense(256, activation="relu"))
model.add(Dense(256, activation="relu"))
model.add(Dense(9, activation="linear"))

base_model.load_weights(new_allen_weight_path, by_name=True)
model.load_weights(allen_weight_path, by_name=True, skip_mismatch=True)
model.save_weights(new_allen_weight_path)
```

After doing this for the Allen and the synthetic weights, my weight file now matches the model created with the following, in a Tensorflow 2.x compliant format:

```python
base_model = Xception(include_top=True, weights=xception_weights)
base_model = Model(inputs=base_model.input, outputs=base_model.output, name="xception")
if species == "rat":
    inputs = Input(shape=(299, 299, 3))
    base_model_layer = base_model(inputs, training=True)
    dense1_layer = Dense(256, activation="relu")(base_model_layer)
    dense2_layer = Dense(256, activation="relu")(dense1_layer)
    output_layer = Dense(9, activation="linear")(dense2_layer)
    model = Model(inputs=inputs, outputs=output_layer)
else:
    model = Sequential()
    model.add(base_model)
    model.add(Dense(256, activation="relu"))
    model.add(Dense(256, activation="relu"))
    model.add(Dense(9, activation="linear"))
if weights is not None:
    model.load_weights(weights, by_name=True, skip_mismatch=True)
return model
```

And if I run the example notebook, my results seem quite good (although not quite identical). Interestingly, I am pretty confident that the prediction layer in the working model that gives such good results is actually the one loaded by default and was never fine-tuned; the rest of your model kind of trained around it, I guess? That also means there is a softmax wedged in the middle there too, which is probably subtracting a few percent from your possible performance.

I am happy to send the corrected H5 files of weights / make a pull request, etc. Let me know how I can help! We are very excited to start using this package.

Paul
That's amazing, thank you. Insane that it was training with a softmax layer in between... this is really motivating to train a new model, since as you mentioned there's a lot of performance being left on the table. The only hesitation I have regarding updating is that it would worsen the current models' results until we can retrain. How possible do you think it is to replicate the current model in TF2 (softmax and all)? The aim would be to replicate the current performance (within some reasonable margin). This is a really great contribution, I can't thank you enough!
The TF2 version of the model "as is" (with the softmax) produces slightly different numbers on your example brain output, but "by eye" it looks the same to me. I will send you an email with the TF2-compatible weights so that you can test on some of your validation data.
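One way to turn "slightly different numbers but the same by eye" into something quantitative is to compare the two sets of predicted alignment parameters (9 values per section, matching the model's output layer) with an explicit tolerance. The arrays below are invented placeholder values, not real DeepSlice output.

```python
# Quantifying the TF1-vs-TF2 prediction difference with numpy.
# The parameter values here are placeholders, not real model output.
import numpy as np

# One section's 9 predicted alignment parameters (invented numbers)
tf1_preds = np.array([[0.512, -1.203, 7.801, 0.002, 0.998, -0.013, 0.431, 0.009, 1.002]])

# Simulate the TF2 port's output: same values plus tiny numerical noise
tf2_preds = tf1_preds + np.random.default_rng(0).normal(scale=1e-4, size=tf1_preds.shape)

max_abs_diff = np.max(np.abs(tf1_preds - tf2_preds))
close = np.allclose(tf1_preds, tf2_preds, atol=1e-3)
```

Reporting `max_abs_diff` (or per-parameter differences) over the whole validation set would give a defensible "within some reasonable margin" number, rather than relying on visual inspection alone.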
Just wanted to note for anyone implementing this solution: there is one change I was hung up on.
Hello :) The way it's written it doesn't work, so I defined both […] just to make it work, but when I try to run the predictions (based on the provided demo notebook) I get the following error: […]

I assume it's because of setting xception_weights to imagenet or None, but if anyone thinks it's not related I would like to know, as perhaps there's something I'm missing or don't understand. Any help is more than welcome :) Big thanks!
I had to make some modifications to the weight file and then use that new weight file to be TF2 compatible. That is probably why there is a layer name that you don't recognize. The link to downloading those new weights may not have been merged (I just got back from a long paternity leave and I'm not quite sure where this was left off). Could you email me and I can send you a download link to the new TF2 weights file in the meantime?
This should be addressed by PR #52 |
Solved by @wjguan :)
Hello!
I'm trying to see if I can update some dependencies of abba_python (pyimagej, python version, etc.), but DeepSlice relies on tensorflow 1.15 and this seems to be the bottleneck here.
I know that such an update is complicated. Do you plan to update tensorflow at some point, or is it such a pain that this won't happen?
If it won't, I'll probably try (one day) to go for a client-(local) server architecture.
Cheers,
Nico