InvalidArgumentError: You must feed a value for placeholder tensor #283

Closed · vevefu opened this issue Aug 22, 2022 · 7 comments

vevefu commented Aug 22, 2022

Hi!

Thanks for this great library!

I am using TensorFlow 2.9.1 and iNNvestigate 2.0.0, and I get the following error message when trying to analyze a Keras model (for example, a simple CNN):

InvalidArgumentError: You must feed a value for placeholder tensor 'input_1_3' with dtype float and shape [?,3,1]
[[{{node input_1_3}}]]

The error appears when running explainer_lrp.analyze() (the last line of the code below), while the analyzer itself is created without a problem.

Any ideas what I might be doing wrong?

import numpy as np
import tensorflow as tf
import innvestigate

from keras.models import Sequential
from keras.layers import Dense, Flatten
from keras.layers.convolutional import Conv1D
from keras.layers.convolutional import MaxPooling1D

tf.compat.v1.disable_eager_execution()

X = np.random.rand(100, 3, 1)
y = np.random.rand(100)
X_train, X_test = np.vsplit(X, [80])
y_train, y_test = np.split(y, [80])

model = Sequential()
model.add(Conv1D(filters=64, kernel_size=2, activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(Flatten())
model.add(Dense(5, activation='relu'))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')

model.fit(X_train, y_train, epochs=10, verbose=0)

explainer_lrp = innvestigate.create_analyzer("gradient", model)
explanations_lrp = explainer_lrp.analyze(X_test)

adrhill added the bug label Aug 22, 2022
adrhill (Collaborator) commented Aug 25, 2022

Hi @vevefu,

I checked your example and it turns out you just need to pass the input_shape to the first layer of your Sequential model:

import numpy as np
import tensorflow as tf
import innvestigate

from keras.models import Sequential
from keras.layers import Dense, Flatten
from keras.layers.convolutional import Conv1D
from keras.layers.convolutional import MaxPooling1D

tf.compat.v1.disable_eager_execution()

input_shape = (3, 1)
X = np.random.rand(100, *input_shape)
y = np.random.rand(100)
X_train, X_test = np.vsplit(X, [80])
y_train, y_test = np.split(y, [80])

model = Sequential()
model.add(Conv1D(filters=64, kernel_size=2, activation="relu", input_shape=input_shape))
model.add(MaxPooling1D(pool_size=2))
model.add(Flatten())
model.add(Dense(5, activation="relu"))
model.add(Dense(1))
model.compile(optimizer="adam", loss="mse")

model.fit(X_train, y_train, epochs=10, verbose=0)

explainer_lrp = innvestigate.create_analyzer("gradient", model)
explanations_lrp = explainer_lrp.analyze(X_test)
print(explanations_lrp)

adrhill closed this as completed Aug 25, 2022
vevefu (Author) commented Sep 5, 2022

Thank you very much!

madarax64 commented

Hello,
Thanks for the brilliant work! Unfortunately, I'm also having the same error, even though I explicitly specify the input_shape argument as described.
Minimal working example:

import numpy as np
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
import tensorflow.keras as keras
from tensorflow.keras.models import Sequential

import innvestigate

if __name__ == "__main__":
    x_train = np.random.random((100, 512, 1))
    y_train = np.random.choice([0, 1], 100)

    model = Sequential()
    model.add(keras.layers.Conv1D(filters=128, kernel_size=8, padding='same', input_shape=(512, 1)))
    model.add(keras.layers.BatchNormalization())
    model.add(keras.layers.Activation('relu'))

    model.add(keras.layers.Conv1D(filters=256, kernel_size=5, padding='same'))
    model.add(keras.layers.BatchNormalization())
    model.add(keras.layers.Activation('relu'))

    model.add(keras.layers.Conv1D(filters=128, kernel_size=3, padding='same'))
    model.add(keras.layers.BatchNormalization())
    model.add(keras.layers.Activation('relu'))

    model.add(keras.layers.GlobalAveragePooling1D())
    model.add(keras.layers.Dense(2, activation='softmax'))
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

    model.fit(x_train, y_train, epochs=10, batch_size=32)
    model = innvestigate.model_wo_softmax(model)

    analyzer = innvestigate.create_analyzer("gradient", model)
    a = analyzer.analyze(x_train)

And I get the following error:

C:\Users\madarax64\Anaconda3\envs\test\lib\site-packages\tensorflow\python\client\session.py:1480: FutureWarning: Passing (type, 1) or '1type' as a 
synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  ret = tf_session.TF_SessionRunCallable(self._session._session,
Traceback (most recent call last):
  File "minimal_example.py", line 37, in <module>
    print(a.shape)
  File "C:\Users\madarax64\Anaconda3\envs\test\lib\site-packages\innvestigate\analyzer\network_base.py", line 250, in analyze
    self.create_analyzer_model()
  File "C:\Users\madarax64\Anaconda3\envs\test\lib\site-packages\innvestigate\analyzer\network_base.py", line 196, in create_analyzer_model
    self._analyzer_model = kmodels.Model(
  File "C:\Users\madarax64\Anaconda3\envs\test\lib\site-packages\tensorflow\python\training\tracking\base.py", line 629, in _method_wrapper
    result = method(self, *args, **kwargs)
  File "C:\Users\madarax64\Anaconda3\envs\test\lib\site-packages\keras\engine\functional.py", line 146, in __init__
    self._init_graph_network(inputs, outputs)
  File "C:\Users\madarax64\Anaconda3\envs\test\lib\site-packages\tensorflow\python\training\tracking\base.py", line 629, in _method_wrapper
    result = method(self, *args, **kwargs)
  File "C:\Users\madarax64\Anaconda3\envs\test\lib\site-packages\keras\engine\functional.py", line 181, in _init_graph_network
    base_layer_utils.create_keras_history(self._nested_outputs)
  File "C:\Users\madarax64\Anaconda3\envs\test\lib\site-packages\keras\engine\base_layer_utils.py", line 175, in create_keras_history
    _, created_layers = _create_keras_history_helper(tensors, set(), [])
  File "C:\Users\madarax64\Anaconda3\envs\test\lib\site-packages\keras\engine\base_layer_utils.py", line 253, in _create_keras_history_helper       
    processed_ops, created_layers = _create_keras_history_helper(
  File "C:\Users\madarax64\Anaconda3\envs\test\lib\site-packages\keras\engine\base_layer_utils.py", line 253, in _create_keras_history_helper       
    processed_ops, created_layers = _create_keras_history_helper(
  File "C:\Users\madarax64\Anaconda3\envs\test\lib\site-packages\keras\engine\base_layer_utils.py", line 253, in _create_keras_history_helper       
    processed_ops, created_layers = _create_keras_history_helper(
  [Previous line repeated 31 more times]
  File "C:\Users\madarax64\Anaconda3\envs\test\lib\site-packages\keras\engine\base_layer_utils.py", line 251, in _create_keras_history_helper       
    constants[i] = backend.function([], op_input)([])
  File "C:\Users\madarax64\Anaconda3\envs\test\lib\site-packages\keras\backend.py", line 4275, in __call__
    fetched = self._callable_fn(*array_vals,
  File "C:\Users\madarax64\Anaconda3\envs\test\lib\site-packages\tensorflow\python\client\session.py", line 1480, in __call__
    ret = tf_session.TF_SessionRunCallable(self._session._session,
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'conv1d_input' with dtype float and shape [?,512,1]
         [[{{node conv1d_input}}]]

My environment details are:
Python 3.8
TensorFlow 2.8.0
iNNvestigate 2.0.0
Windows 10

Could you kindly let me know how to fix this?

adrhill (Collaborator) commented Oct 11, 2022

Hi @madarax64,

I was able to narrow your issue down to the use of BatchNormalization layers. This is indeed a bug and I opened #292 to track it.

As a temporary workaround until we've fixed this issue, you could try to convert your BatchNormalization layers to Dense layers after training the model.

@madarax64
Copy link

Hello @adrhill,
Thanks for the response. How would I go about converting the BN layers to Dense layers?

adrhill (Collaborator) commented Oct 12, 2022

During inference, BatchNormalization layers just apply a linear (affine) scaling to their inputs. They compute:

gamma * (batch - self.moving_mean) / sqrt(self.moving_var + epsilon) + beta

You could define Dense layers that implement the same scaling by manually setting the corresponding weights and biases. You can read more in the Keras BatchNormalization documentation.
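
For illustration, here is an untested minimal sketch of such a conversion. Note that batchnorm_to_dense is just a hypothetical helper, not part of the iNNvestigate API, and it assumes channels-last inputs with both scale and center enabled, so that get_weights() returns [gamma, beta, moving_mean, moving_variance]:

import numpy as np
from tensorflow import keras

def batchnorm_to_dense(bn):
    # Hypothetical helper: fold a trained BatchNormalization layer
    # into a Dense layer that is equivalent at inference time.
    gamma, beta, mean, var = bn.get_weights()
    scale = gamma / np.sqrt(var + bn.epsilon)  # per-channel factor
    kernel = np.diag(scale)                    # diagonal weight matrix
    bias = beta - mean * scale                 # folded-in offset
    dense = keras.layers.Dense(len(gamma))
    dense.build((None, len(gamma)))
    dense.set_weights([kernel, bias])
    return dense

# Rebuild the trained Sequential model with the BN layers swapped out:
converted = keras.Sequential()
for layer in model.layers:
    if isinstance(layer, keras.layers.BatchNormalization):
        converted.add(batchnorm_to_dense(layer))
    else:
        converted.add(layer)

Since Dense acts on the last axis of its input, the diagonal kernel applies exactly BN's per-channel scaling to the (batch, steps, channels) tensors in the model above.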

madarax64 commented

Okay, got it. Thanks!
