
[BUG] tf.gradients is not supported when eager execution is enabled. Use tf.GradientTape instead. #284

Closed
picciama opened this issue Aug 26, 2022 · 2 comments
picciama commented Aug 26, 2022

Read the docs

Done. This behaviour isn't documented.


Describe the bug

The error message in the title is raised when analyzing a tf.keras.Model via analyzer.analyze under TensorFlow 2.

Steps to reproduce the bug

import innvestigate
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.Input(10), tf.keras.layers.Dense(10, name='dense1')])
model.compile(run_eagerly=False, loss=tf.keras.losses.MeanSquaredError())

model.fit(x=np.arange(10).reshape(1, 10), y=np.arange(10).reshape(1, 10))

analyzer = innvestigate.create_analyzer("gradient", model)
analyzer.analyze(np.arange(10).reshape(1, 10))

Expected behavior

The analyzer shouldn't throw an error. Since v2.0.0 is supposed to support TensorFlow >= 2, which executes eagerly by default, the analyzer should use tf.GradientTape.
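For illustration, here is a minimal eager-mode sketch of how tf.GradientTape could replace the failing tf.gradients call: output_gradients plays the role of grad_ys. This is not iNNvestigate's API, just a demonstration of the mechanism under eager execution.

```python
import numpy as np
import tensorflow as tf

# Same toy model as in the reproduction snippet.
model = tf.keras.Sequential([tf.keras.Input(10), tf.keras.layers.Dense(10)])
x = tf.constant(np.arange(10, dtype=np.float32).reshape(1, 10))

with tf.GradientTape() as tape:
    tape.watch(x)  # x is a constant, so it must be watched explicitly
    y = model(x)

# Eager equivalent of tf.gradients(y, x, grad_ys=...):
# output_gradients supplies the incoming relevance/gradient.
grad = tape.gradient(y, x, output_gradients=tf.ones_like(y))
print(grad.shape)  # (1, 10)
```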

Error Output

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Input In [10], in <cell line: 8>()
      6 model.fit(x=np.arange(10).reshape(1, 10), y=np.arange(10).reshape(1,10))
      7 analyzer = innvestigate.create_analyzer("gradient", model)
----> 8 analyzer.analyze(np.arange(10).reshape(1, 10))

File ~/miniconda3/envs/snap/lib/python3.9/site-packages/innvestigate/analyzer/network_base.py:250, in AnalyzerNetworkBase.analyze(self, X, neuron_selection)
    247 # TODO: what does should mean in docstring?
    249 if self._analyzer_model_done is False:
--> 250     self.create_analyzer_model()
    252 if neuron_selection is not None and self._neuron_selection_mode != "index":
    253     raise ValueError(
    254         f"neuron_selection_mode {self._neuron_selection_mode} doesn't support ",
    255         "'neuron_selection' parameter.",
    256     )

File ~/miniconda3/envs/snap/lib/python3.9/site-packages/innvestigate/analyzer/network_base.py:164, in AnalyzerNetworkBase.create_analyzer_model(self)
    161 self._analysis_inputs = analysis_inputs
    162 self._prepared_model = model
--> 164 tmp = self._create_analysis(
    165     model, stop_analysis_at_tensors=stop_analysis_at_tensors
    166 )
    167 if isinstance(tmp, tuple):
    168     if len(tmp) == 3:

File ~/miniconda3/envs/snap/lib/python3.9/site-packages/innvestigate/analyzer/reverse_base.py:269, in ReverseAnalyzerBase._create_analysis(self, model, stop_analysis_at_tensors)
    261 return_all_reversed_tensors = (
    262     self._reverse_check_min_max_values
    263     or self._reverse_check_finite
    264     or self._reverse_keep_tensors
    265 )
    267 # if return_all_reversed_tensors is False,
    268 # reversed_tensors will be None
--> 269 reversed_input_tensors, reversed_tensors = self._reverse_model(
    270     model,
    271     stop_analysis_at_tensors=stop_analysis_at_tensors,
    272     return_all_reversed_tensors=return_all_reversed_tensors,
    273 )
    274 ret = self._postprocess_analysis(reversed_input_tensors)
    276 if return_all_reversed_tensors:

File ~/miniconda3/envs/snap/lib/python3.9/site-packages/innvestigate/analyzer/reverse_base.py:242, in ReverseAnalyzerBase._reverse_model(self, model, stop_analysis_at_tensors, return_all_reversed_tensors)
    239 if stop_analysis_at_tensors is None:
    240     stop_analysis_at_tensors = []
--> 242 return igraph.reverse_model(
    243     model,
    244     reverse_mappings=self._reverse_mapping,
    245     default_reverse_mapping=self._default_reverse_mapping,
    246     head_mapping=self._head_mapping,
    247     stop_mapping_at_tensors=stop_analysis_at_tensors,
    248     verbose=self._reverse_verbose,
    249     clip_all_reversed_tensors=self._reverse_clip_values,
    250     project_bottleneck_tensors=self._reverse_project_bottleneck_layers,
    251     return_all_reversed_tensors=return_all_reversed_tensors,
    252 )

File ~/miniconda3/envs/snap/lib/python3.9/site-packages/innvestigate/backend/graph.py:1237, in reverse_model(model, reverse_mappings, default_reverse_mapping, head_mapping, stop_mapping_at_tensors, verbose, return_all_reversed_tensors, clip_all_reversed_tensors, project_bottleneck_tensors, execution_trace, reapply_on_copied_layers)
   1235 _print(f"[NID: {nid}] Reverse layer-node {layer}")
   1236 reverse_mapping = initialized_reverse_mappings[layer]
-> 1237 reversed_Xs = reverse_mapping(
   1238     Xs,
   1239     Ys,
   1240     reversed_Ys,
   1241     {
   1242         "nid": nid,
   1243         "model": model,
   1244         "layer": layer,
   1245         "stop_mapping_at_ids": local_stop_mapping_at_ids,
   1246     },
   1247 )
   1248 reversed_Xs = ibackend.to_list(reversed_Xs)
   1249 add_reversed_tensors(nid, Xs, reversed_Xs)

File ~/miniconda3/envs/snap/lib/python3.9/site-packages/innvestigate/analyzer/reverse_base.py:122, in ReverseAnalyzerBase._gradient_reverse_mapping(self, Xs, Ys, reversed_Ys, reverse_state)
    120 """Returns masked gradient."""
    121 mask = [id(X) not in reverse_state["stop_mapping_at_ids"] for X in Xs]
--> 122 grad = ibackend.gradients(Xs, Ys, reversed_Ys)
    123 return ibackend.apply_mask(grad, mask)

File ~/miniconda3/envs/snap/lib/python3.9/site-packages/innvestigate/backend/__init__.py:82, in gradients(Xs, Ys, known_Ys)
     77 if len(Ys) != len(known_Ys):
     78     raise ValueError(
     79         "Gradient computation failesd, Ys and known_Ys not of same length"
     80     )
---> 82 grad = tf.gradients(Ys, Xs, grad_ys=known_Ys, stop_gradients=Xs)
     83 if grad is None:
     84     raise TypeError("Gradient computation failed, returned None.")

File ~/miniconda3/envs/snap/lib/python3.9/site-packages/tensorflow/python/ops/gradients_impl.py:311, in gradients_v2(ys, xs, grad_ys, name, gate_gradients, aggregation_method, stop_gradients, unconnected_gradients)
    306 # Creating the gradient graph for control flow mutates Operations.
    307 # _mutation_lock ensures a Session.run call cannot occur between creating and
    308 # mutating new ops.
    309 # pylint: disable=protected-access
    310 with ops.get_default_graph()._mutation_lock():
--> 311   return gradients_util._GradientsHelper(
    312       ys, xs, grad_ys, name, True, gate_gradients,
    313       aggregation_method, stop_gradients,
    314       unconnected_gradients)

File ~/miniconda3/envs/snap/lib/python3.9/site-packages/tensorflow/python/ops/gradients_util.py:479, in _GradientsHelper(ys, xs, grad_ys, name, colocate_gradients_with_ops, gate_gradients, aggregation_method, stop_gradients, unconnected_gradients, src_graph)
    477 """Implementation of gradients()."""
    478 if context.executing_eagerly():
--> 479   raise RuntimeError("tf.gradients is not supported when eager execution "
    480                      "is enabled. Use tf.GradientTape instead.")
    481 ys = _AsList(ys)
    482 xs = _AsList(xs)

RuntimeError: tf.gradients is not supported when eager execution is enabled. Use tf.GradientTape instead.

Platform information

  • OS: Debian 11
  • Python version: 3.9
  • iNNvestigate version: 2.0.0
  • TensorFlow version: 2.9.1
@picciama picciama added the bug label Aug 26, 2022
adrhill commented Aug 26, 2022

Hi @picciama,

thanks for your interest in the package and the thorough issue.

The original iNNvestigate 1.0 was written on top of TF1 and Keras (among other backends) and works by inverting the computational tree of the model to create an analyzer. To ensure compatibility with existing code and to have identical outputs between iNNvestigate 1 and 2, we've kept this graph-based approach.

I will open a separate issue with your feature request to support GradientTape. Contributions are more than welcome and this issue might just take a little tweak to this function.

In the meantime, I'll make the current requirement of using tf.compat.v1.disable_eager_execution() more obvious in the readme.
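The workaround boils down to disabling eager execution before any model or analyzer is built, which restores the TF1-style graph mode that tf.gradients requires:

```python
import tensorflow as tf

# Must be called before any model or analyzer is created.
# Afterwards, iNNvestigate's graph-based analyzers work as in TF1.
tf.compat.v1.disable_eager_execution()

print(tf.executing_eagerly())  # False
```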

adrhill commented Aug 26, 2022

Tracked in #285.

@adrhill adrhill closed this as completed Aug 26, 2022